Nov 28 16:58:29 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 28 16:58:29 crc restorecon[4679]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:29 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 
16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 28 16:58:30 crc 
restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 
16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 28 16:58:30 crc restorecon[4679]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 28 16:58:30 crc kubenswrapper[4710]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 16:58:30 crc kubenswrapper[4710]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 28 16:58:30 crc kubenswrapper[4710]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 16:58:30 crc kubenswrapper[4710]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
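The long run of restorecon messages reading "not reset as customized by admin" above is expected behavior rather than an error: container_file_t is listed among the SELinux targeted policy's customizable types, and restorecon deliberately leaves any file whose current type is customizable alone unless it is run with -F. A minimal way to confirm this on the node, assuming shell access and a RHEL/Fedora targeted policy (the path below is just one example taken from the log):

    # container_file_t should appear in the customizable-types list.
    cat /etc/selinux/targeted/contexts/customizable_types

    # Compare the on-disk label with the policy default for one logged path.
    ls -Zd /var/lib/kubelet/plugins/csi-hostpath
    matchpathcon /var/lib/kubelet/plugins/csi-hostpath

    # Dry run (-n, verbose -v) with force (-F) to preview a full reset.
    restorecon -nvF /var/lib/kubelet/plugins/csi-hostpath

Without -F, restorecon preserves these admin-set labels, which is why the per-pod MCS category pairs (for example s0:c7,c13) survive the relabel pass.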
Nov 28 16:58:30 crc kubenswrapper[4710]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 28 16:58:30 crc kubenswrapper[4710]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.970881 4710 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974584 4710 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974601 4710 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974606 4710 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974610 4710 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974613 4710 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974617 4710 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974621 4710 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974624 4710 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974629 4710 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974633 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974637 4710 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974641 4710 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974647 4710 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974652 4710 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
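The flag deprecation warnings above all point at the same migration: these kubelet flags are meant to move into the file passed via --config. A minimal sketch of such a KubeletConfiguration fragment, written with a heredoc; the socket path, taint, and reserved amounts are placeholders chosen for illustration, not values read from this node:

    # Sketch only: placeholder values, not this node's real configuration.
    cat <<'EOF' > /tmp/kubelet-config-fragment.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # replaces --container-runtime-endpoint (CRI-O socket assumed)
    containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
    # replaces --volume-plugin-dir
    volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
    # replaces --register-with-taints
    registerWithTaints:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    # replaces --system-reserved
    systemReserved:
      cpu: 500m
      memory: 1Gi
    EOF

Two of the warned flags have no direct config-file field: per the messages above, --minimum-container-ttl-duration is superseded by the evictionHard/evictionSoft settings, and --pod-infra-container-image is superseded by sandbox image information supplied by the CRI runtime.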
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974656 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974661 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974665 4710 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974669 4710 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974673 4710 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974677 4710 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974681 4710 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974684 4710 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974688 4710 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974692 4710 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974696 4710 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974700 4710 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974703 4710 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974707 4710 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974710 4710 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974715 4710 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974719 4710 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974723 4710 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974728 4710 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974732 4710 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974736 4710 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974740 4710 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974744 4710 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974747 4710 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974752 4710 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974772 4710 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974782 4710 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974788 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974793 4710 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974798 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974804 4710 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974809 4710 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974813 4710 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974817 4710 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974820 4710 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974824 4710 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974827 4710 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974832 4710 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974835 4710 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974838 4710 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974842 4710 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974846 4710 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974849 4710 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974852 4710 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974856 4710 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974859 4710 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974863 4710 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974867 4710 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974870 4710 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974874 4710 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974877 4710 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974880 4710 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974884 4710 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974889 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974892 4710 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974896 4710 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.974899 4710 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.974976 4710 flags.go:64] FLAG: --address="0.0.0.0"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.974985 4710 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.974992 4710 flags.go:64] FLAG: --anonymous-auth="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.974997 4710 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975004 4710 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975009 4710 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975015 4710 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975020 4710 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975025 4710 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975029 4710 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975035 4710 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975040 4710 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975045 4710 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975050 4710 flags.go:64] FLAG: --cgroup-root=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975054 4710 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975059 4710 flags.go:64] FLAG: --client-ca-file=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975063 4710 flags.go:64] FLAG: --cloud-config=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975067 4710 flags.go:64] FLAG: --cloud-provider=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975071 4710 flags.go:64] FLAG: --cluster-dns="[]"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975077 4710 flags.go:64] FLAG: --cluster-domain=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975081 4710 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975086 4710 flags.go:64] FLAG: --config-dir=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975090 4710 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975094 4710 flags.go:64] FLAG: --container-log-max-files="5"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975099 4710 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975103 4710 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975108 4710 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975112 4710 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975116 4710 flags.go:64] FLAG: --contention-profiling="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975121 4710 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975125 4710 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975129 4710 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975133 4710 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975138 4710 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975143 4710 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975147 4710 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975151 4710 flags.go:64] FLAG: --enable-load-reader="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975155 4710 flags.go:64] FLAG: --enable-server="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975160 4710 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975164 4710 flags.go:64] FLAG: --event-burst="100"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975169 4710 flags.go:64] FLAG: --event-qps="50"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975173 4710 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975177 4710 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975181 4710 flags.go:64] FLAG: --eviction-hard=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975186 4710 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975190 4710 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975194 4710 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975198 4710 flags.go:64] FLAG: --eviction-soft=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975202 4710 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975206 4710 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975211 4710 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975215 4710 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975219 4710 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975223 4710 flags.go:64] FLAG: --fail-swap-on="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975227 4710 flags.go:64] FLAG: --feature-gates=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975232 4710 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975236 4710 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975240 4710 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975244 4710 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975248 4710 flags.go:64] FLAG: --healthz-port="10248"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975256 4710 flags.go:64] FLAG: --help="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975260 4710 flags.go:64] FLAG: --hostname-override=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975264 4710 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975268 4710 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975272 4710 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975276 4710 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975280 4710 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975285 4710 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975289 4710 flags.go:64] FLAG: --image-service-endpoint=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975293 4710 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975297 4710 flags.go:64] FLAG: --kube-api-burst="100"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975301 4710 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975305 4710 flags.go:64] FLAG: --kube-api-qps="50"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975309 4710 flags.go:64] FLAG: --kube-reserved=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975313 4710 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975317 4710 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975322 4710 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975326 4710 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975331 4710 flags.go:64] FLAG: --lock-file=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975334 4710 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975338 4710 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975343 4710 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975349 4710 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975353 4710 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975357 4710 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975361 4710 flags.go:64] FLAG: --logging-format="text"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975365 4710 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975370 4710 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975374 4710 flags.go:64] FLAG: --manifest-url=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975378 4710 flags.go:64] FLAG: --manifest-url-header=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975383 4710 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975387 4710 flags.go:64] FLAG: --max-open-files="1000000"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975393 4710 flags.go:64] FLAG: --max-pods="110"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975397 4710 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975402 4710 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975406 4710 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975410 4710 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975414 4710 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975418 4710 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975423 4710 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975432 4710 flags.go:64] FLAG: --node-status-max-images="50"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975436 4710 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975440 4710 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975444 4710 flags.go:64] FLAG: --pod-cidr=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975448 4710 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975455 4710 flags.go:64] FLAG: --pod-manifest-path=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975459 4710 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975463 4710 flags.go:64] FLAG: --pods-per-core="0"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975467 4710 flags.go:64] FLAG: --port="10250"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975471 4710 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975475 4710 flags.go:64] FLAG: --provider-id=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975479 4710 flags.go:64] FLAG: --qos-reserved=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975484 4710 flags.go:64] FLAG: --read-only-port="10255"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975488 4710 flags.go:64] FLAG: --register-node="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975492 4710 flags.go:64] FLAG: --register-schedulable="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975496 4710 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975503 4710 flags.go:64] FLAG: --registry-burst="10"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975507 4710 flags.go:64] FLAG: --registry-qps="5"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975511 4710 flags.go:64] FLAG: --reserved-cpus=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975515 4710 flags.go:64] FLAG: --reserved-memory=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975520 4710 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975524 4710 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975529 4710 flags.go:64] FLAG: --rotate-certificates="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975534 4710 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975541 4710 flags.go:64] FLAG: --runonce="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975546 4710 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975551 4710 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975557 4710 flags.go:64] FLAG: --seccomp-default="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975561 4710 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975565 4710 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975570 4710 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975574 4710 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975579 4710 flags.go:64] FLAG: --storage-driver-password="root"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975584 4710 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975588 4710 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975592 4710 flags.go:64] FLAG: --storage-driver-user="root"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975596 4710 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975600 4710 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975604 4710 flags.go:64] FLAG: --system-cgroups=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975608 4710 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975614 4710 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975618 4710 flags.go:64] FLAG: --tls-cert-file=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975622 4710 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975627 4710 flags.go:64] FLAG: --tls-min-version=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975631 4710 flags.go:64] FLAG: --tls-private-key-file=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975635 4710 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975640 4710 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975644 4710 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975648 4710 flags.go:64] FLAG: --v="2"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975654 4710 flags.go:64] FLAG: --version="false"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975659 4710 flags.go:64] FLAG: --vmodule=""
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975664 4710 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.975668 4710 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975784 4710 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975789 4710 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975793 4710 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975799 4710 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975803 4710 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975806 4710 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975810 4710 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975813 4710 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975817 4710 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975820 4710 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975824 4710 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975829 4710 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975834 4710 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975838 4710 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975842 4710 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975846 4710 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975851 4710 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975855 4710 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975859 4710 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975863 4710 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975867 4710 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975871 4710 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975875 4710 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975878 4710 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975882 4710 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975885 4710 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975889 4710 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975894 4710 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975898 4710 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975902 4710 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975905 4710 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975909 4710 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975913 4710 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975916 4710 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975920 4710 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975924 4710 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975927 4710 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975931 4710 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975934 4710 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975938 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975941 4710 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975945 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975948 4710 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975953 4710 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975957 4710 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975961 4710 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975965 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975970 4710 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975974 4710 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975978 4710 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975982 4710 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975987 4710 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975991 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975994 4710 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.975998 4710 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976002 4710 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976005 4710 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976009 4710 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976012 4710 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976017 4710 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976021 4710 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976025 4710 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976028 4710 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976032 4710 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976036 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976039 4710 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976043 4710 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976047 4710 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976051 4710 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976054 4710 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.976058 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.976064 4710 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.986469 4710 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.986502 4710 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986662 4710 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986683 4710 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986691 4710 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986700 4710 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986707 4710 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986714 4710 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986721 4710 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986727 4710 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986734 4710 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986741 4710 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986748 4710 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986755 4710 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986782 4710 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986789 4710 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986796 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986803 4710 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986810 4710 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986819 4710 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986828 4710 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986835 4710 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986842 4710 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986849 4710 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986869 4710 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986878 4710 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986887 4710 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986894 4710 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986901 4710 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986907 4710 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986913 4710 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986920 4710 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986927 4710 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986933 4710 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986940 4710 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986947 4710 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986954 4710 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986960 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986967 4710 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986973 4710 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986980 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986989 4710 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.986997 4710 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987005 4710 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987014 4710 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987020 4710 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987026 4710 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987033 4710 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987042 4710 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987051 4710 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987059 4710 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987065 4710 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987073 4710 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987079 4710 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987087 4710 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987094 4710 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987100 4710 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987107 4710 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987114 4710 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987120 4710 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987139 4710 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987146 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987152 4710 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987159 4710 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987166 4710 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987172 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987179 4710 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987185 4710 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987192 4710 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987198 4710 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987204 4710 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987211 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987217 4710 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.987227 4710 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987455 4710 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987467 4710 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987474 4710 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987480 4710 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987487 4710 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987494 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987501 4710 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987507 4710 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987514 4710 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987520 4710 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987527 4710 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987535 4710 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987542 4710 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987549 4710 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987556 4710 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987565 4710 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987573 4710 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987581 4710 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987588 4710 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987595 4710 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987603 4710 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987610 4710 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987629 4710 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987637 4710 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987643 4710 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987649 4710 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987656 4710 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987662 4710 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987669 4710 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987675 4710 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987682 4710 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987688 4710 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987695 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987701 4710 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987708 4710 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987714 4710 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987721 4710 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987727 4710 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987734 4710 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987740 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987747 4710 feature_gate.go:330] unrecognized feature gate: Example
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987753 4710 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987783 4710 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987789 4710 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987799 4710 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987805 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987813 4710 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987820 4710 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987826 4710 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987833 4710 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987840 4710 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987850 4710 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987858 4710 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987866 4710 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987873 4710 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987880 4710 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987889 4710 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987896 4710 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987916 4710 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987923 4710 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987930 4710 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987938 4710 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987945 4710 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987954 4710 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987962 4710 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987969 4710 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987976 4710 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987983 4710 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987990 4710 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.987996 4710 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 28 16:58:30 crc kubenswrapper[4710]: W1128 16:58:30.988003 4710 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.988013 4710 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.988545 4710 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.992261 4710 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.992429 4710 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.993388 4710 server.go:997] "Starting client certificate rotation"
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.993446 4710 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.993694 4710 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-07 23:34:34.086323281 +0000 UTC
Nov 28 16:58:30 crc kubenswrapper[4710]: I1128 16:58:30.994008 4710 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.002380 4710 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.003727 4710 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.005821 4710 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.018541 4710 log.go:25] "Validated CRI v1 runtime API"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.044674 4710 log.go:25] "Validated CRI v1 image API"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.046670 4710 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.049072 4710 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-28-16-54-11-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.049102 4710 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.063981 4710 manager.go:217] Machine: {Timestamp:2025-11-28 16:58:31.061850835 +0000 UTC m=+0.320150900 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:56ee7c25-214c-4ce4-aeb2-2eaf54b784dc BootID:a3da3522-f4c2-42e2-89ac-39d27db90382 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:5f:7f:20 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:5f:7f:20 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cb:04:44 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:87:51:66 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4f:c5:eb Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:4d:38:73 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3a:3f:46:bd:12:52 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:96:4d:8b:17:08:be Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.064189 4710 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.064293 4710 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.065450 4710 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.065642 4710 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.065677 4710 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.065904 4710 topology_manager.go:138] "Creating topology manager with none policy"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.065916 4710 container_manager_linux.go:303] "Creating device plugin manager"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.066103 4710 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.066130 4710 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.066344 4710 state_mem.go:36] "Initialized new in-memory state store"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.066458 4710 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.067130 4710 kubelet.go:418] "Attempting to sync node with API server"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.067150 4710 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.067171 4710 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.067183 4710 kubelet.go:324] "Adding apiserver pod source"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.067193 4710 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.068774 4710 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.069127 4710 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.069810 4710 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070270 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070313 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070324 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070331 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070345 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070356 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070363 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070374 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070385 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070396 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070406 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070413 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.070599 4710 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.070744 4710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.070744 4710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.070866 4710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError"
Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.070911 4710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.071038 4710 server.go:1280] "Started kubelet"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.071291 4710 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 28 16:58:31 crc systemd[1]: Started Kubernetes Kubelet.
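Every API call in this window fails with dial tcp 38.129.56.205:6443: connect: connection refused: the certificate signing request post, the Node and Service reflectors above, and more below. On a single-node cluster this is expected, since the kubelet starts before the API server it will later host as a static pod; the client-go machinery simply retries with backoff until the endpoint answers. Below is a rough self-contained Go probe of that endpoint with exponential backoff; the retry parameters are invented for illustration and are not the kubelet's actual values.

// dialprobe_sketch.go: hedged illustration of retry-with-backoff against
// the api-int endpoint seen in the log.
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPI dials addr until it accepts a TCP connection or the attempt
// budget runs out, doubling the sleep between attempts each time.
func waitForAPI(addr string, attempts int) error {
	backoff := 200 * time.Millisecond
	for i := 1; i <= attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		// While the API server is down this prints the same
		// "connect: connection refused" text seen in the log.
		fmt.Printf("attempt %d: %v (retrying in %v)\n", i, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff between attempts
	}
	return fmt.Errorf("%s still unreachable after %d attempts", addr, attempts)
}

func main() {
	if err := waitForAPI("api-int.crc.testing:6443", 5); err != nil {
		fmt.Println(err)
	}
}

On the node above, a probe like this would print a few connection-refused attempts and then succeed once kube-apiserver begins listening on 6443, which is why the kubelet can keep starting despite the errors.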
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.072877 4710 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.072960 4710 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.075818 4710 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.077035 4710 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.077094 4710 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.076631 4710 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.205:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187c3a2b2929aa81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 16:58:31.070993025 +0000 UTC m=+0.329293080,LastTimestamp:2025-11-28 16:58:31.070993025 +0000 UTC m=+0.329293080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.077122 4710 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:07:02.078624618 +0000 UTC
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.077286 4710 volume_manager.go:287] "The desired_state_of_world populator starts"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.077306 4710 volume_manager.go:289] "Starting Kubelet Volume Manager"
Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.077422 4710 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.079019 4710 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.079535 4710 factory.go:55] Registering systemd factory
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.079568 4710 factory.go:221] Registration of the systemd container factory successfully
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.081939 4710 server.go:460] "Adding debug handlers to kubelet server"
Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.081897 4710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.081986 4710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.082024 4710 factory.go:153] Registering CRI-O factory
Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.082060 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="200ms"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.082071 4710 factory.go:221] Registration of the crio container factory successfully
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.082143 4710 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.082163 4710 factory.go:103] Registering Raw factory
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.082177 4710 manager.go:1196] Started watching for new ooms in manager
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.082689 4710 manager.go:319] Starting recovery of all containers
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.089400 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.089708 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.089817 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.089982 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090065 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090161 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090236 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090388 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090481 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090556 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090630 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090717 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090811 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090938 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.090997 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091064 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091163 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091219 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091301 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091832 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091908 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091930 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091962 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091977 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.091995 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092008 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092034 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092051 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092070 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092081 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092119 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092139 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092154 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092178 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092193 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092215 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092378 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092414 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092454 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092472 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092495 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092509 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092528 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092554 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092571 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092593 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092613 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092634 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092654 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092671 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092694 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092709 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092740 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092783 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092804 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092826 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092845 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092872 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092888 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092905 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092924 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092940 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092962 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092979 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.092995 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093019 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093037 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093586 4710 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093630 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093651 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093667 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093700 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093716 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093729 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093750 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093788 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093808 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093824 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093894 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093971 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.093997 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094037 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094277 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094339 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094363 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094385 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094409 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094428 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094452 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094473 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094491 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094518 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094538 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094566 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094586 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094606 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094631 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094653 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094676 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094696 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094714 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094741 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094781 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094803 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094830 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094926 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.094987 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095017 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095040 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095067 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095091 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095108 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095130 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095147 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095167 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095183 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095198 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095211 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095230 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095243 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095311 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095333 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095349 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095370 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095385 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095400 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095419 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095435 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095451 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095470 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095483 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095502 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095517 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095533 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095552 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095566 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095583 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095598 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095610 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49"
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095627 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095669 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095691 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095704 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095719 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095736 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095771 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095797 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095813 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095826 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095844 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095861 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095875 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095891 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095904 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095920 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095934 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095949 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095965 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095978 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.095997 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096012 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096024 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096042 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096058 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096081 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096103 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096119 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096137 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096149 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096167 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096180 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096192 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096209 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096225 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096241 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096255 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096268 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096285 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096299 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096312 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096328 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096341 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096361 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096373 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096386 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096403 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096416 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096432 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096447 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096460 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096477 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096494 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096511 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096527 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096632 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096659 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096683 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096700 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096715 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096735 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096750 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096791 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096806 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096821 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096839 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096853 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096871 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096885 4710 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096898 4710 reconstruct.go:97] "Volume reconstruction finished" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.096908 4710 reconciler.go:26] "Reconciler: start to sync state" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.106163 4710 manager.go:324] Recovery completed Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.116714 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.118211 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.118247 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.118256 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.118935 4710 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.118951 4710 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.118972 4710 state_mem.go:36] "Initialized new in-memory state store" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.137134 4710 policy_none.go:49] "None policy: Start" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.138386 4710 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.138606 4710 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.138640 4710 state_mem.go:35] "Initializing new in-memory state store" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.140141 4710 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.140179 4710 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.140204 4710 kubelet.go:2335] "Starting kubelet main sync loop" Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.140309 4710 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.141080 4710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.141131 4710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.177749 4710 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.185835 4710 manager.go:334] "Starting Device Plugin manager" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.185901 4710 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.185919 4710 server.go:79] "Starting device plugin registration server" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.186386 4710 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.186433 4710 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.186665 4710 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.186746 4710 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.186777 4710 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.194038 4710 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.241480 4710 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.241571 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.242890 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.242944 4710 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.242957 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.243086 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.243388 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.243450 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.244357 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.244385 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.244397 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.244411 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.244435 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.244445 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.244587 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.244616 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.244861 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.245719 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.245784 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.245799 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.245845 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.245890 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.245907 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.246112 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.246176 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.246207 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.247143 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.247178 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.247190 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.247535 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.247581 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.247599 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.247738 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.247872 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.247906 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.248550 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.248599 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.248609 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.249153 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.249180 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.249192 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.249342 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.249381 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.250052 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.250072 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.250081 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.282802 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="400ms" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.286903 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.289520 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.289557 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.289567 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.289587 4710 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.290093 4710 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.205:6443: connect: 
connection refused" node="crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.299808 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.299901 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.299970 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300062 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300099 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300217 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300271 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300295 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300321 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300361 
4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300413 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300452 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300483 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300504 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.300519 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401319 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401454 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401548 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401556 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401596 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401637 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401687 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401684 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401731 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401865 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401938 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.401986 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402051 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402146 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") 
pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402180 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402215 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402326 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402342 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402398 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402447 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402468 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402269 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402501 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc 
kubenswrapper[4710]: I1128 16:58:31.402529 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402524 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402547 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402590 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402671 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402744 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.402839 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.490675 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.492355 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.492427 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.492446 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.492480 4710 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.493124 4710 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.205:6443: connect: connection refused" node="crc" Nov 28 
16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.574792 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.585553 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.601161 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.602120 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b57424a54197dc7b20c1cf5451e4fdbd2d3eea007656173c57fbbbb27e2c0eeb WatchSource:0}: Error finding container b57424a54197dc7b20c1cf5451e4fdbd2d3eea007656173c57fbbbb27e2c0eeb: Status 404 returned error can't find the container with id b57424a54197dc7b20c1cf5451e4fdbd2d3eea007656173c57fbbbb27e2c0eeb Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.607491 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-5a58799be75745f2fab74b2e8624361c8b8f5643e3a7aa6ff70a0ec430af6357 WatchSource:0}: Error finding container 5a58799be75745f2fab74b2e8624361c8b8f5643e3a7aa6ff70a0ec430af6357: Status 404 returned error can't find the container with id 5a58799be75745f2fab74b2e8624361c8b8f5643e3a7aa6ff70a0ec430af6357 Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.610923 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.615859 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.622937 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-8a3186ef068ee63d8877ca4c5e475621bc81288e38d70865e40993cf3cdd3ee3 WatchSource:0}: Error finding container 8a3186ef068ee63d8877ca4c5e475621bc81288e38d70865e40993cf3cdd3ee3: Status 404 returned error can't find the container with id 8a3186ef068ee63d8877ca4c5e475621bc81288e38d70865e40993cf3cdd3ee3 Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.628940 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ec13fe8d28db1e2bba7f26eabf285674f76ae8e3982c1e7d27390b8edaa1a868 WatchSource:0}: Error finding container ec13fe8d28db1e2bba7f26eabf285674f76ae8e3982c1e7d27390b8edaa1a868: Status 404 returned error can't find the container with id ec13fe8d28db1e2bba7f26eabf285674f76ae8e3982c1e7d27390b8edaa1a868 Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.638469 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-67cd91911de37c38078f6a9de3819c94a33001950f17e8c3ba0ce59c4f4ef3b8 WatchSource:0}: Error finding container 67cd91911de37c38078f6a9de3819c94a33001950f17e8c3ba0ce59c4f4ef3b8: Status 404 returned error can't find the container with id 67cd91911de37c38078f6a9de3819c94a33001950f17e8c3ba0ce59c4f4ef3b8 Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.683928 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="800ms" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.893518 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.894030 4710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.894092 4710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.894619 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.894653 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.894664 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:31 crc kubenswrapper[4710]: I1128 16:58:31.894689 4710 kubelet_node_status.go:76] "Attempting to register node" node="crc" 
Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.894983 4710 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.205:6443: connect: connection refused" node="crc" Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.910499 4710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.910542 4710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:31 crc kubenswrapper[4710]: W1128 16:58:31.920026 4710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 16:58:31 crc kubenswrapper[4710]: E1128 16:58:31.920062 4710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.076906 4710 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.077674 4710 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:19:22.409193528 +0000 UTC Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.144528 4710 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8" exitCode=0 Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.144588 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.144669 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"67cd91911de37c38078f6a9de3819c94a33001950f17e8c3ba0ce59c4f4ef3b8"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.144744 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.145575 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:32 
crc kubenswrapper[4710]: I1128 16:58:32.145610 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.145619 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.146469 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.146532 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec13fe8d28db1e2bba7f26eabf285674f76ae8e3982c1e7d27390b8edaa1a868"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.148829 4710 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f" exitCode=0 Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.148886 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.148909 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8a3186ef068ee63d8877ca4c5e475621bc81288e38d70865e40993cf3cdd3ee3"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.148980 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.149536 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.149562 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.149573 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.150206 4710 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="83d0462e809787c7ee59df52245d86e49cba4bbd86a9f112b04bb19f7494bb13" exitCode=0 Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.150226 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"83d0462e809787c7ee59df52245d86e49cba4bbd86a9f112b04bb19f7494bb13"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.150246 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5a58799be75745f2fab74b2e8624361c8b8f5643e3a7aa6ff70a0ec430af6357"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.150331 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.151360 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.151385 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.151396 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.151540 4710 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0" exitCode=0 Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.151605 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.151630 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b57424a54197dc7b20c1cf5451e4fdbd2d3eea007656173c57fbbbb27e2c0eeb"} Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.151681 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.152272 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.152292 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.152302 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.152839 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.153736 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.153768 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.153778 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:32 crc kubenswrapper[4710]: W1128 16:58:32.241957 4710 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 16:58:32 crc kubenswrapper[4710]: E1128 16:58:32.242048 4710 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:32 crc 
kubenswrapper[4710]: E1128 16:58:32.484903 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="1.6s" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.696094 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.697471 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.697508 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.697519 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:32 crc kubenswrapper[4710]: I1128 16:58:32.697581 4710 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:32 crc kubenswrapper[4710]: E1128 16:58:32.698181 4710 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.205:6443: connect: connection refused" node="crc" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.070100 4710 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 28 16:58:33 crc kubenswrapper[4710]: E1128 16:58:33.071160 4710 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.076803 4710 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.077852 4710 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:22:29.70293175 +0000 UTC Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.077915 4710 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 56h23m56.625022724s for next certificate rotation Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.156259 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.156327 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.156342 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.156353 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.156470 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.156974 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.157188 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.157213 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.157223 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.158949 4710 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fa47e3a261fd3b9c5f2000aded5ed0c2c615ce19184eeb2347cb45226f59c66c" exitCode=0 Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.158997 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fa47e3a261fd3b9c5f2000aded5ed0c2c615ce19184eeb2347cb45226f59c66c"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.159082 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.159739 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.159773 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.159783 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.162476 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.162559 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.163222 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.163246 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:33 crc 
kubenswrapper[4710]: I1128 16:58:33.163257 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.165528 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.165554 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.165576 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.165645 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.167960 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.167985 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.167994 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.170081 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.170110 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.170124 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a"} Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.170189 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.170778 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.170804 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.170814 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 
16:58:33.610929 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:33 crc kubenswrapper[4710]: I1128 16:58:33.616057 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.177512 4710 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ea49891b2be27254e0f0f9f080fcfb07fc3422f84280874e65a72b0ff0923da2" exitCode=0 Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.177580 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ea49891b2be27254e0f0f9f080fcfb07fc3422f84280874e65a72b0ff0923da2"} Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.177681 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.177730 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.177832 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.177864 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.179503 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.179561 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.179580 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.179559 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.179622 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.179644 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.180271 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.180331 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.180356 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.299330 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.301172 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.301224 4710 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.301237 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.301318 4710 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:34 crc kubenswrapper[4710]: I1128 16:58:34.865684 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.182823 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"256c18ad759964d406dfd584146ebbf047ce76da5653b57525dc4af87abb26d5"} Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.182871 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"716ff1740066805455149ad26283a4af5cb2b3abde96b98a0af5ee67beb7acbb"} Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.182877 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.182876 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.182924 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.182925 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.182886 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"119ebc7dae5ae187bc8ad8916e37e91f8c50698c310378812cb3d1e6e891b2e8"} Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.183711 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.183738 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.183749 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.183826 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.183856 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.183865 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:35 crc kubenswrapper[4710]: I1128 16:58:35.391276 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.192134 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"55e70db5f9b06844dedd9b91aea50a049edd6088de5b2c6599a3f55dcd46df97"} Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.192195 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.192269 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.192278 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.192201 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"086989a0ff461a582c25825c47210ee74f30d22f2f0ef454c9b2118bc1993e02"} Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.193552 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.193629 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.193653 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.194258 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.194310 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.194322 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.910707 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.910935 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.911013 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.912751 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.912821 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:36 crc kubenswrapper[4710]: I1128 16:58:36.912832 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:37 crc kubenswrapper[4710]: I1128 16:58:37.142519 4710 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 28 16:58:37 crc kubenswrapper[4710]: I1128 16:58:37.194404 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:37 crc kubenswrapper[4710]: I1128 16:58:37.195343 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:37 crc kubenswrapper[4710]: I1128 16:58:37.195379 4710 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:37 crc kubenswrapper[4710]: I1128 16:58:37.195392 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:37 crc kubenswrapper[4710]: I1128 16:58:37.344448 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 28 16:58:37 crc kubenswrapper[4710]: I1128 16:58:37.664860 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 28 16:58:38 crc kubenswrapper[4710]: I1128 16:58:38.197749 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:38 crc kubenswrapper[4710]: I1128 16:58:38.199434 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:38 crc kubenswrapper[4710]: I1128 16:58:38.199477 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:38 crc kubenswrapper[4710]: I1128 16:58:38.199486 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:38 crc kubenswrapper[4710]: I1128 16:58:38.391981 4710 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 16:58:38 crc kubenswrapper[4710]: I1128 16:58:38.392107 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 16:58:39 crc kubenswrapper[4710]: I1128 16:58:39.200098 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:39 crc kubenswrapper[4710]: I1128 16:58:39.201244 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:39 crc kubenswrapper[4710]: I1128 16:58:39.201317 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:39 crc kubenswrapper[4710]: I1128 16:58:39.201337 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:39 crc kubenswrapper[4710]: I1128 16:58:39.899115 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:58:39 crc kubenswrapper[4710]: I1128 16:58:39.899477 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:39 crc kubenswrapper[4710]: I1128 16:58:39.900931 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:39 crc kubenswrapper[4710]: I1128 16:58:39.900987 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:39 crc kubenswrapper[4710]: 
I1128 16:58:39.900999 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.466970 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.467240 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.469617 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.469665 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.469677 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.935368 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.935493 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.935530 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.936679 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.936708 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.936717 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:40 crc kubenswrapper[4710]: I1128 16:58:40.956980 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:41 crc kubenswrapper[4710]: E1128 16:58:41.195774 4710 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 28 16:58:41 crc kubenswrapper[4710]: I1128 16:58:41.204979 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:41 crc kubenswrapper[4710]: I1128 16:58:41.205983 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:41 crc kubenswrapper[4710]: I1128 16:58:41.206012 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:41 crc kubenswrapper[4710]: I1128 16:58:41.206023 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:41 crc kubenswrapper[4710]: I1128 16:58:41.211255 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:42 crc kubenswrapper[4710]: I1128 16:58:42.208635 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:42 crc kubenswrapper[4710]: I1128 16:58:42.209714 4710 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:42 crc kubenswrapper[4710]: I1128 16:58:42.209739 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:42 crc kubenswrapper[4710]: I1128 16:58:42.209749 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:43 crc kubenswrapper[4710]: I1128 16:58:43.970008 4710 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 16:58:43 crc kubenswrapper[4710]: I1128 16:58:43.970068 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 16:58:43 crc kubenswrapper[4710]: I1128 16:58:43.974685 4710 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 28 16:58:43 crc kubenswrapper[4710]: I1128 16:58:43.974773 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 28 16:58:44 crc kubenswrapper[4710]: I1128 16:58:44.871971 4710 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]log ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]etcd ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/generic-apiserver-start-informers ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/priority-and-fairness-filter ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-apiextensions-informers ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-apiextensions-controllers ok Nov 28 
16:58:44 crc kubenswrapper[4710]: [+]poststarthook/crd-informer-synced ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-system-namespaces-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 28 16:58:44 crc kubenswrapper[4710]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 28 16:58:44 crc kubenswrapper[4710]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/bootstrap-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/start-kube-aggregator-informers ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/apiservice-registration-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/apiservice-discovery-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]autoregister-completion ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/apiservice-openapi-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 28 16:58:44 crc kubenswrapper[4710]: livez check failed Nov 28 16:58:44 crc kubenswrapper[4710]: I1128 16:58:44.872043 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 16:58:47 crc kubenswrapper[4710]: I1128 16:58:47.696845 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 28 16:58:47 crc kubenswrapper[4710]: I1128 16:58:47.697027 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:47 crc kubenswrapper[4710]: I1128 16:58:47.698687 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:47 crc kubenswrapper[4710]: I1128 16:58:47.698737 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:47 crc kubenswrapper[4710]: I1128 16:58:47.698749 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:47 crc kubenswrapper[4710]: I1128 16:58:47.714847 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 28 16:58:48 crc 
kubenswrapper[4710]: I1128 16:58:48.225194 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.226549 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.226601 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.226617 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.392585 4710 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.392818 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 16:58:48 crc kubenswrapper[4710]: E1128 16:58:48.938985 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.941522 4710 trace.go:236] Trace[1602071430]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 16:58:34.925) (total time: 14015ms): Nov 28 16:58:48 crc kubenswrapper[4710]: Trace[1602071430]: ---"Objects listed" error: 14015ms (16:58:48.941) Nov 28 16:58:48 crc kubenswrapper[4710]: Trace[1602071430]: [14.015980839s] [14.015980839s] END Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.941549 4710 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.941685 4710 trace.go:236] Trace[304888310]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 16:58:35.403) (total time: 13537ms): Nov 28 16:58:48 crc kubenswrapper[4710]: Trace[304888310]: ---"Objects listed" error: 13537ms (16:58:48.941) Nov 28 16:58:48 crc kubenswrapper[4710]: Trace[304888310]: [13.537899311s] [13.537899311s] END Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.941710 4710 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.942868 4710 trace.go:236] Trace[715670087]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 16:58:34.863) (total time: 14079ms): Nov 28 16:58:48 crc kubenswrapper[4710]: Trace[715670087]: ---"Objects listed" error: 14079ms (16:58:48.942) Nov 28 16:58:48 crc kubenswrapper[4710]: Trace[715670087]: [14.079293675s] [14.079293675s] END Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.942884 4710 reflector.go:368] Caches populated 
for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.943063 4710 trace.go:236] Trace[897582930]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Nov-2025 16:58:34.268) (total time: 14674ms): Nov 28 16:58:48 crc kubenswrapper[4710]: Trace[897582930]: ---"Objects listed" error: 14674ms (16:58:48.942) Nov 28 16:58:48 crc kubenswrapper[4710]: Trace[897582930]: [14.674197162s] [14.674197162s] END Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.943078 4710 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.943602 4710 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 28 16:58:48 crc kubenswrapper[4710]: E1128 16:58:48.943737 4710 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 28 16:58:48 crc kubenswrapper[4710]: I1128 16:58:48.973667 4710 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.071611 4710 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39658->192.168.126.11:17697: read: connection reset by peer" start-of-body= Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.071735 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:39658->192.168.126.11:17697: read: connection reset by peer" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.076492 4710 apiserver.go:52] "Watching apiserver" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.078800 4710 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.079127 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.079628 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.079632 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.079698 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.079732 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.079814 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.079874 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.080011 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.080129 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.080167 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.080935 4710 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.082566 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.082666 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.082728 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.082735 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.082740 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.082667 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.082737 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.082785 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.083078 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.107717 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.122656 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.132543 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.141483 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.143976 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144009 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144028 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144044 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144060 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144076 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144092 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 16:58:49 
crc kubenswrapper[4710]: I1128 16:58:49.144107 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144124 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144142 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144160 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144176 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144190 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144204 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144219 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144255 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144269 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 
16:58:49.144286 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144302 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144316 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144332 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144346 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144362 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144378 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144395 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144411 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144428 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") 
" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144443 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144460 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144476 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144492 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144508 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144524 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144541 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144559 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144574 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144588 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 
16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144604 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144620 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144640 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144657 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144671 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144710 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144726 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144743 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144778 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144796 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 16:58:49 crc 
kubenswrapper[4710]: I1128 16:58:49.144813 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144828 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144844 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144861 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144877 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144893 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144908 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144924 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144940 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144956 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " 
Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144966 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.144993 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145012 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145028 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145046 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145063 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145080 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145097 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145135 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145155 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145173 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145190 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145207 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145230 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145247 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145263 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145282 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145300 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145317 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145332 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145348 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145366 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145381 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145399 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145415 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145431 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145448 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145464 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145482 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145498 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145514 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145529 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145546 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145560 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145576 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145592 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145608 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145640 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145656 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145789 4710 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145814 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145831 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145853 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145876 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145897 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145913 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145931 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145952 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146297 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 
28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146322 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146348 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146373 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146399 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146461 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146487 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146511 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146547 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146571 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146595 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146621 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146645 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146668 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146690 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146714 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146736 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146775 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146808 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146833 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146858 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146881 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146905 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146927 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146952 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146974 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146997 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147023 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147046 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147070 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147095 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" 
(UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147117 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147140 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147168 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147191 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147216 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147240 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147257 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147276 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147293 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147313 4710 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147333 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147350 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147369 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147388 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147407 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147423 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147440 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147474 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147493 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 
16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147510 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147528 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147545 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147562 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147580 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147597 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147617 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147633 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147650 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147669 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147687 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147704 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147722 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147740 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147778 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147797 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147814 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147832 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147849 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147866 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147889 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147907 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147925 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147944 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147963 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147982 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147999 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148017 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148034 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148052 4710 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148069 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148088 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148105 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148125 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148143 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148160 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148179 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148196 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148214 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 
16:58:49.148231 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148250 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148289 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148315 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148335 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148354 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148375 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148392 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148410 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" 
(UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148429 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148446 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148463 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148482 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148500 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148516 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148534 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148569 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155961 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.156454 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.157827 4710 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.158091 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.158876 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145118 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.163588 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145574 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145587 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.145751 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146229 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146495 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146608 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146788 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146859 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.146947 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147087 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147303 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147532 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147751 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.147950 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148013 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.148061 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.149472 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.149492 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.149683 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.150106 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.150377 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.150406 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.150473 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.150573 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.150585 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.151180 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.151554 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.151679 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.151559 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.151776 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.151925 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.152093 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.152142 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.152174 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.152729 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.152986 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.153021 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.153079 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.153088 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.153163 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.153672 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.153719 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.154060 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.154076 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.154593 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.154688 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.154933 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.154953 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.154988 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155015 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155084 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155203 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155308 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155330 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155523 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155551 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155627 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155713 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.156026 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.156073 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.156310 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.156321 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.155538 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.156537 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.156685 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.156785 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.156992 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.157422 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.157572 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.157645 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.158733 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.158815 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.158830 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.158971 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.159168 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.159179 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.159177 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.159225 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.159639 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.159736 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.159913 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.160068 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.160221 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.160375 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.160735 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.161208 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.161341 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.161662 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.161673 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.161900 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.162176 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.162276 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.162547 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.162808 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.163078 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.164060 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.163365 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.163657 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.164301 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.164501 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.164475 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.164550 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.164568 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.164650 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.164677 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.164734 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.165032 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.165145 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.165381 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.165489 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.165507 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.165635 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.165560 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:49.665538101 +0000 UTC m=+18.923838146 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.165710 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:49.665682525 +0000 UTC m=+18.923982700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.165833 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:49.665821119 +0000 UTC m=+18.924121224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.166552 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). 
InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.166721 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.166736 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.166896 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.167067 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.167329 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.166632 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.167599 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.167967 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.168095 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.168232 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.168276 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.168386 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:49.668362213 +0000 UTC m=+18.926662268 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.168489 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.169325 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.169347 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.169461 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.169488 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.169522 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.169983 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.170033 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.170347 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.170714 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.172555 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.173020 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.173164 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.173278 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.173410 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.173696 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.174275 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.174310 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.174348 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.174376 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.174400 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.174418 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.174472 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:49.674455808 +0000 UTC m=+18.932756093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.174924 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.174929 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.175674 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.176486 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.176882 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.177206 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.177269 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.177329 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.177744 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.178008 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.178190 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.183093 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.185682 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.186559 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.186638 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.186802 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.187013 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.187465 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.189090 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.189303 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.189907 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.190355 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.190584 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.190611 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.195155 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.195489 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.195541 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.197719 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.198591 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.200957 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.201462 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.201498 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.201580 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.203696 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.204119 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.204158 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.204360 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.206276 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.206753 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.207122 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.210710 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.210713 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.210981 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.211175 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.211248 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.211355 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.212654 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.212797 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.213264 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.213728 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.213784 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.213894 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.213916 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.214085 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.214247 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.216054 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.220719 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.232782 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.233603 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.236205 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.241483 4710 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528" exitCode=255 Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.241540 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528"} Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.245609 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.249634 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.250302 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.250627 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251147 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251253 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251269 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251378 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251564 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251627 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251639 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251698 4710 
reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251709 4710 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251718 4710 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251727 4710 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251736 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251795 4710 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251826 4710 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251924 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251939 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251949 4710 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251978 4710 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251988 4710 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.251996 4710 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc 
kubenswrapper[4710]: I1128 16:58:49.252004 4710 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252012 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252020 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252029 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252037 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252255 4710 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252271 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252281 4710 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252291 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252301 4710 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252311 4710 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252321 4710 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252331 4710 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc 
kubenswrapper[4710]: I1128 16:58:49.252342 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252351 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252359 4710 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252366 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252376 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252385 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252396 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252407 4710 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252388 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252421 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252544 4710 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252555 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252567 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252579 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252590 4710 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252599 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252608 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252616 4710 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252623 4710 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252632 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252641 4710 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252648 4710 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252656 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252665 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252672 4710 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252681 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252689 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252697 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252706 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252716 4710 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252725 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252733 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252742 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252750 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252776 4710 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252784 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252794 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252803 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252812 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252820 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252828 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252836 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252845 4710 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252853 4710 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252862 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252871 4710 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252880 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252887 4710 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252896 4710 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252904 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252911 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252921 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252929 4710 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252937 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252945 4710 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252953 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252962 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252970 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252979 4710 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252987 4710 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.252996 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253005 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253012 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253021 4710 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253029 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253038 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253048 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253056 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253065 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253073 4710 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253081 4710 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253089 4710 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253097 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253105 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253115 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253123 4710 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253131 4710 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253139 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253147 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253551 4710 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253562 4710 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253570 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253578 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253587 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253595 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253603 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253611 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253619 4710 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253628 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253635 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253664 4710 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 28 
16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253673 4710 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253682 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253691 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253701 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253711 4710 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253720 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253728 4710 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253739 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253748 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253776 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253785 4710 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253795 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253803 4710 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" 
DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253811 4710 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253820 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253829 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253837 4710 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253845 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253853 4710 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253861 4710 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253870 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253879 4710 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253889 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253898 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253908 4710 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253917 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253926 4710 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253934 4710 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253941 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253950 4710 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253958 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253968 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253979 4710 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.253991 4710 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254002 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254012 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254029 4710 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254041 4710 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254052 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254063 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254073 4710 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254084 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254095 4710 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254104 4710 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254113 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254122 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254131 4710 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254139 4710 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254148 4710 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254158 4710 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254167 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254175 4710 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254185 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254194 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254204 4710 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254213 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254222 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254230 4710 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254240 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254449 4710 scope.go:117] "RemoveContainer" containerID="6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254582 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254626 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254885 4710 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254920 4710 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254936 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 
16:58:49.254949 4710 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.254961 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.256051 4710 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.256070 4710 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.256085 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.256097 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.256110 4710 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.256121 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.256133 4710 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.260906 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.273924 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.284002 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.296367 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.306885 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.396066 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.404060 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.416299 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 28 16:58:49 crc kubenswrapper[4710]: W1128 16:58:49.426936 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-f6277b2d93b4b73748658bc8c1824ac22a1bdf65a3d5ff7403a6fe5088f093fc WatchSource:0}: Error finding container f6277b2d93b4b73748658bc8c1824ac22a1bdf65a3d5ff7403a6fe5088f093fc: Status 404 returned error can't find the container with id f6277b2d93b4b73748658bc8c1824ac22a1bdf65a3d5ff7403a6fe5088f093fc Nov 28 16:58:49 crc kubenswrapper[4710]: W1128 16:58:49.428612 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-feb50b525c8e3c2a88deeb02ac29f49522a59bfe178206a750f27d0640cde436 WatchSource:0}: Error finding container feb50b525c8e3c2a88deeb02ac29f49522a59bfe178206a750f27d0640cde436: Status 404 returned error can't find the container with id feb50b525c8e3c2a88deeb02ac29f49522a59bfe178206a750f27d0640cde436 Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.760347 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.760644 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.760669 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.760687 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761089 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761163 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:49 crc 
kubenswrapper[4710]: E1128 16:58:49.761100 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761241 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761257 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761272 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761292 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:50.761269531 +0000 UTC m=+20.019569576 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761313 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761320 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:50.761307222 +0000 UTC m=+20.019607267 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761365 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:50.761333133 +0000 UTC m=+20.019633268 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761240 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.761376 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.761394 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:50.761386764 +0000 UTC m=+20.019686809 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:49 crc kubenswrapper[4710]: E1128 16:58:49.762219 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:50.762183937 +0000 UTC m=+20.020484162 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.873583 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.893912 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.910621 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.928793 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.946170 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.964087 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.981472 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:49 crc kubenswrapper[4710]: I1128 16:58:49.997029 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:49Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.246549 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"feb50b525c8e3c2a88deeb02ac29f49522a59bfe178206a750f27d0640cde436"} Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.248731 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d"} Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.248778 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e"} Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.248789 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f6277b2d93b4b73748658bc8c1824ac22a1bdf65a3d5ff7403a6fe5088f093fc"} Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.250997 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.254015 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1"} Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.254256 4710 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.255828 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7"} Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.255880 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"db04f30edf7024dc5396722b27292131387e3243b0d0a9564bee8e2117c6eb7f"} Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.259581 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.282733 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver 
kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28
T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.298186 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.312960 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.326607 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.342616 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.360914 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.375999 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.398453 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.412560 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.426624 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.448155 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.470261 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.484364 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.499481 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:50Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.777530 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.777713 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:52.777682292 +0000 UTC m=+22.035982337 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.777920 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.777970 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.778004 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:50 crc kubenswrapper[4710]: I1128 16:58:50.778050 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778197 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778191 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778220 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 
16:58:50.778219 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778317 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778296 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:52.77827505 +0000 UTC m=+22.036575105 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778383 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:52.778363112 +0000 UTC m=+22.036663257 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778404 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:52.778395433 +0000 UTC m=+22.036695598 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778483 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778545 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778604 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:50 crc kubenswrapper[4710]: E1128 16:58:50.778688 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:52.778678482 +0000 UTC m=+22.036978607 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.141563 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.141642 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:51 crc kubenswrapper[4710]: E1128 16:58:51.141817 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:51 crc kubenswrapper[4710]: E1128 16:58:51.141990 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.142437 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:51 crc kubenswrapper[4710]: E1128 16:58:51.142664 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.145734 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.146474 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.147662 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.148478 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.149676 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.150288 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.150977 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.152020 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.152725 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.153727 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.154370 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 28 
16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.155841 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.156461 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.156715 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde
74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.157217 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.158215 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.158837 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.159790 4710 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.160300 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.160924 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.161958 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.162493 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.163616 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.164295 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.165346 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.165885 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.166621 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.167902 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.168467 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.169358 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.169951 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.170493 4710 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.170611 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.171754 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.172232 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.172704 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.173151 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.174275 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.174982 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.175471 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.176102 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.176797 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.177298 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.177894 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.181236 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.181926 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.182714 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.183295 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.184269 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.185022 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.185368 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.185849 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.186393 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.187492 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.188161 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.188718 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.189672 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.199312 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.211883 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.223859 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:51 crc kubenswrapper[4710]: I1128 16:58:51.236936 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.143985 4710 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.145846 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.145885 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.145895 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.145956 4710 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.154094 4710 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.154591 4710 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.157710 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.157785 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.157799 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.157821 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.157838 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.174307 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.178202 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.178247 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.178258 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.178274 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.178286 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.195701 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.200008 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.200061 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.200073 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.200090 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.200103 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.217958 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.223611 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.223664 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.223679 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.223697 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.223710 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.240530 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.244070 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.244105 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.244116 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.244129 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.244140 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.261246 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2"} Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.262112 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.262233 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.263850 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.263892 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.263907 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.263928 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.263946 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.283184 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.297200 4710 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.311909 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.323892 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.336791 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.354031 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.366298 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.366390 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.366403 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.366421 
4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.366434 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.376968 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:52Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.469332 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.469385 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.469402 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.469420 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.469434 4710 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.572896 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.572945 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.572954 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.572970 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.572979 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.675833 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.675883 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.675895 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.675913 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.675925 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.778646 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.778719 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.778736 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.778801 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.778820 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.797247 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.797364 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.797433 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797487 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:56.7974526 +0000 UTC m=+26.055752675 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.797549 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.797624 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797657 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797690 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797710 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797723 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797808 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797868 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797811 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797866 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:56.79783489 +0000 UTC m=+26.056134975 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797965 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:56.797952894 +0000 UTC m=+26.056253009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797984 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:56.797974044 +0000 UTC m=+26.056274199 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.797888 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:52 crc kubenswrapper[4710]: E1128 16:58:52.798043 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:56.798035606 +0000 UTC m=+26.056335761 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.881456 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.881495 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.881506 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.881523 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.881535 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.985210 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.985260 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.985273 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.985294 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:52 crc kubenswrapper[4710]: I1128 16:58:52.985308 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:52Z","lastTransitionTime":"2025-11-28T16:58:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.088320 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.088374 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.088384 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.088408 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.088422 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.140896 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.140969 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.140938 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:53 crc kubenswrapper[4710]: E1128 16:58:53.141125 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:53 crc kubenswrapper[4710]: E1128 16:58:53.141234 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:53 crc kubenswrapper[4710]: E1128 16:58:53.141347 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.191861 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.191932 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.191952 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.191978 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:58:53 crc kubenswrapper[4710]: I1128 16:58:53.192012 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:53Z","lastTransitionTime":"2025-11-28T16:58:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry cycle (four "Recording event message for node" entries plus "Node became not ready") repeats roughly every 100 ms from 16:58:53.296550 through 16:58:54.744032; only the timestamps change ...]
Nov 28 16:58:54 crc kubenswrapper[4710]: I1128 16:58:54.810065 4710 csr.go:261] certificate signing request csr-7dkbf is approved, waiting to be issued
Nov 28 16:58:54 crc kubenswrapper[4710]: I1128 16:58:54.832632 4710 csr.go:257] certificate signing request csr-7dkbf is issued
[... node-status cycle repeats at 16:58:54.846339 and 16:58:54.948972 ...]
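Each of the collapsed cycles above reports the same root cause: the kubelet holds the node's Ready condition at False because /etc/kubernetes/cni/net.d/ contains no CNI configuration file. A minimal diagnostic sketch, not part of the log, assuming only the directory path quoted in the message:

    // List the CNI config directory the kubelet complains about. An empty
    // listing matches the NetworkPluginNotReady message seen above.
    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        const cniDir = "/etc/kubernetes/cni/net.d/" // path quoted in the log message
        entries, err := os.ReadDir(cniDir)
        if err != nil {
            log.Fatalf("cannot read %s: %v", cniDir, err)
        }
        if len(entries) == 0 {
            fmt.Println("no CNI configuration files yet; NetworkReady stays false")
            return
        }
        for _, e := range entries {
            fmt.Println("found CNI config:", e.Name())
        }
    }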
[... node-status cycle repeats at 16:58:55.050726 ...]
Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.140967 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.140995 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.141016 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.141142 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.141182 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.141231 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... node-status cycle repeats at 16:58:55.152512 and 16:58:55.254673 ...]
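The condition={...} payload in the "Node became not ready" entries is plain JSON, so it can be decoded instead of read inline. A short sketch (a hypothetical helper, not part of the kubelet) applied to one of the payloads above:

    // Decode one "Node became not ready" condition payload and print the
    // reason/message pair on its own, which is easier to scan than the raw line.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Payload copied verbatim from the 16:58:55 entries above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }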
[... node-status cycle repeats at 16:58:55.358291 ...]
Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.395782 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.399520 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.410136 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.421348 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.427445 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.438869 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.457656 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.461341 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.461493 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.461569 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.461677 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.461770 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.472505 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.487551 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.501996 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z"
[... Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.517519 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" repeats, near verbatim, the failed status patch logged at 16:58:55.438869 above, rejected with the same expired-certificate webhook error ...]
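Every failed status patch in this stretch fails on the same TLS error from the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743: its serving certificate expired at 2025-08-24T17:21:41Z. A hypothetical sketch, not part of the log, that reads that certificate's validity window directly; verification is skipped deliberately, since verification is exactly what fails:

    // Dial the webhook endpoint named in the errors and print its serving
    // certificate's validity window (NotBefore/NotAfter).
    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
    )

    func main() {
        // InsecureSkipVerify only disables chain/date checks; we still receive
        // the peer certificate and can inspect its dates.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
                cert.Subject, cert.NotBefore, cert.NotAfter)
        }
    }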
Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.533376 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.548880 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.564349 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.564396 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.564407 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.564423 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.564437 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.566291 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.582366 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.601282 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.614609 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.635694 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.666868 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.666915 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.666926 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.666942 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.666957 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.716908 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mhrhv"] Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.717268 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2j8nb"] Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.717395 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mhrhv" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.717582 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.719115 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-9mscc"] Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.719575 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.721026 4710 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.721067 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726706 4710 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.726774 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726716 4710 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.726803 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726836 4710 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.726857 4710 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726838 4710 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726878 4710 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.726878 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.726889 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726893 4710 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726905 4710 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726923 4710 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.726934 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: 
User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.726973 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.726971 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726934 4710 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726923 4710 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.727004 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726923 4710 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.727013 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: W1128 16:58:55.726936 4710 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is 
forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.727025 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: E1128 16:58:55.727040 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.727289 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-t4jqb"] Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.727913 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.732778 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.732810 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.755077 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.768693 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.768726 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.768734 4710 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.768747 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.768769 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.771833 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.782019 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.807255 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.822179 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826073 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-var-lib-kubelet\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826112 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc9x8\" (UniqueName: \"kubernetes.io/projected/ac18a0af-e029-40a2-a035-963326dd8738-kube-api-access-wc9x8\") pod \"node-resolver-mhrhv\" (UID: \"ac18a0af-e029-40a2-a035-963326dd8738\") " pod="openshift-dns/node-resolver-mhrhv" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826129 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-var-lib-cni-bin\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826143 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-cni-dir\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826161 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-run-k8s-cni-cncf-io\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826180 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-conf-dir\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826197 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpvcq\" (UniqueName: \"kubernetes.io/projected/4ca87069-1d78-4e20-ba15-f37acec7135b-kube-api-access-bpvcq\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826212 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5x7h\" (UniqueName: \"kubernetes.io/projected/b2ae360a-eba6-4e76-9942-83f5c21f3877-kube-api-access-n5x7h\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826279 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-cnibin\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826329 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b2ae360a-eba6-4e76-9942-83f5c21f3877-cni-binary-copy\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826347 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-var-lib-cni-multus\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826376 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4ca87069-1d78-4e20-ba15-f37acec7135b-rootfs\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826391 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ca87069-1d78-4e20-ba15-f37acec7135b-proxy-tls\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826405 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ca87069-1d78-4e20-ba15-f37acec7135b-mcd-auth-proxy-config\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826420 4710 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ac18a0af-e029-40a2-a035-963326dd8738-hosts-file\") pod \"node-resolver-mhrhv\" (UID: \"ac18a0af-e029-40a2-a035-963326dd8738\") " pod="openshift-dns/node-resolver-mhrhv" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826435 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-os-release\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826447 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-run-netns\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826493 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-system-cni-dir\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826528 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-socket-dir-parent\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826562 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-hostroot\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826600 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-run-multus-certs\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826620 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-daemon-config\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.826635 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-etc-kubernetes\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.833801 4710 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 
2026-11-28 16:53:54 +0000 UTC, rotation deadline is 2026-10-01 20:45:52.653066832 +0000 UTC Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.833858 4710 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7371h46m56.819212688s for next certificate rotation Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.837615 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.850856 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.866178 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.870782 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.870825 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.870836 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.870853 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.870866 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.880025 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.890498 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.900556 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.912135 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.921699 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.927816 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-var-lib-cni-bin\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.927849 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-var-lib-kubelet\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.927876 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-os-release\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.927901 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc9x8\" (UniqueName: \"kubernetes.io/projected/ac18a0af-e029-40a2-a035-963326dd8738-kube-api-access-wc9x8\") pod \"node-resolver-mhrhv\" (UID: \"ac18a0af-e029-40a2-a035-963326dd8738\") " pod="openshift-dns/node-resolver-mhrhv" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.927923 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-system-cni-dir\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.927954 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b2ae360a-eba6-4e76-9942-83f5c21f3877-cni-binary-copy\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.927980 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-var-lib-cni-multus\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.927998 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ca87069-1d78-4e20-ba15-f37acec7135b-proxy-tls\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928017 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ca87069-1d78-4e20-ba15-f37acec7135b-mcd-auth-proxy-config\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928032 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/ac18a0af-e029-40a2-a035-963326dd8738-hosts-file\") pod \"node-resolver-mhrhv\" (UID: \"ac18a0af-e029-40a2-a035-963326dd8738\") " pod="openshift-dns/node-resolver-mhrhv" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928049 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-run-netns\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928067 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-socket-dir-parent\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928085 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-etc-kubernetes\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928114 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f7bc0ce-8cd7-457d-8194-69354145dccc-cni-binary-copy\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928133 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpvcq\" (UniqueName: \"kubernetes.io/projected/4ca87069-1d78-4e20-ba15-f37acec7135b-kube-api-access-bpvcq\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928151 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-cni-dir\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928165 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-run-k8s-cni-cncf-io\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928179 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-conf-dir\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928194 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-cnibin\") pod 
\"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928216 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2fth\" (UniqueName: \"kubernetes.io/projected/4f7bc0ce-8cd7-457d-8194-69354145dccc-kube-api-access-q2fth\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928247 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5x7h\" (UniqueName: \"kubernetes.io/projected/b2ae360a-eba6-4e76-9942-83f5c21f3877-kube-api-access-n5x7h\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928275 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f7bc0ce-8cd7-457d-8194-69354145dccc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928302 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-cnibin\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928335 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-os-release\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928357 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4ca87069-1d78-4e20-ba15-f37acec7135b-rootfs\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928388 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-system-cni-dir\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928400 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-var-lib-kubelet\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928408 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-hostroot\") pod \"multus-2j8nb\" (UID: 
\"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928459 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-hostroot\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928498 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-run-multus-certs\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928515 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4ca87069-1d78-4e20-ba15-f37acec7135b-rootfs\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928540 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928573 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-daemon-config\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928622 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-conf-dir\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928661 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-system-cni-dir\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.927988 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-var-lib-cni-bin\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928710 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-run-netns\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928722 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-run-k8s-cni-cncf-io\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928720 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-etc-kubernetes\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928748 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-run-multus-certs\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928777 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ac18a0af-e029-40a2-a035-963326dd8738-hosts-file\") pod \"node-resolver-mhrhv\" (UID: \"ac18a0af-e029-40a2-a035-963326dd8738\") " pod="openshift-dns/node-resolver-mhrhv" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928748 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-host-var-lib-cni-multus\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928847 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-cnibin\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928822 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-socket-dir-parent\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928948 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-os-release\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.928952 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-cni-dir\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.933183 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.947476 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.959554 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.970700 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.973498 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.973541 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.973552 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.973568 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.973579 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:55Z","lastTransitionTime":"2025-11-28T16:58:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.982346 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:55 crc kubenswrapper[4710]: I1128 16:58:55.992325 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:55Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.004012 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.014098 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029236 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f7bc0ce-8cd7-457d-8194-69354145dccc-cni-binary-copy\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029303 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fth\" (UniqueName: \"kubernetes.io/projected/4f7bc0ce-8cd7-457d-8194-69354145dccc-kube-api-access-q2fth\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029357 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-cnibin\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029408 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f7bc0ce-8cd7-457d-8194-69354145dccc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029475 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029526 4710 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-os-release\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029559 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-system-cni-dir\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029520 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-cnibin\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029650 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-os-release\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029692 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-system-cni-dir\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.029877 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f7bc0ce-8cd7-457d-8194-69354145dccc-tuning-conf-dir\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.030330 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f7bc0ce-8cd7-457d-8194-69354145dccc-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.075865 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.075905 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.075913 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.075931 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.075943 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.089304 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mzbq9"] Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.091131 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.092788 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.093493 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.093544 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.093809 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.093971 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.095217 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.095795 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.105075 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.118505 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.130696 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-netns\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.130774 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.130849 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-env-overrides\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.130915 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-etc-openvswitch\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.130941 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-openvswitch\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.130975 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-netd\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 
crc kubenswrapper[4710]: I1128 16:58:56.131088 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-slash\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131143 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovn-node-metrics-cert\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131162 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-script-lib\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131177 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-systemd-units\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131199 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-kubelet\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131215 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-systemd\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131305 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-ovn\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131339 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-node-log\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131371 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-log-socket\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131424 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-bin\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131456 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-config\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131497 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-var-lib-openvswitch\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131520 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-ovn-kubernetes\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.131564 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pzd6\" (UniqueName: \"kubernetes.io/projected/bcf34ad7-9bed-49eb-ad10-20bc5825292a-kube-api-access-6pzd6\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.133831 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.156316 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.169003 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"
/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.178356 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.178413 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.178424 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.178441 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.178452 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.180507 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.192650 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.204605 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.217305 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.230050 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232483 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovn-node-metrics-cert\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232515 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-script-lib\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232543 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-systemd-units\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232576 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-kubelet\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232622 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-systemd\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232665 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-ovn\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232690 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-node-log\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232720 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-log-socket\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232743 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-bin\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232747 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-kubelet\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232781 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-systemd-units\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232831 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-systemd\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232869 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-ovn\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232910 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-log-socket\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232924 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-node-log\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232785 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-config\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232995 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-var-lib-openvswitch\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.232948 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-bin\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233022 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-ovn-kubernetes\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233058 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pzd6\" (UniqueName: \"kubernetes.io/projected/bcf34ad7-9bed-49eb-ad10-20bc5825292a-kube-api-access-6pzd6\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233067 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-var-lib-openvswitch\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233106 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-ovn-kubernetes\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: 
I1128 16:58:56.233123 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-netns\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233153 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233179 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-env-overrides\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233222 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-openvswitch\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233246 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-etc-openvswitch\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233271 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-netd\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233319 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-slash\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233408 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-slash\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233443 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-openvswitch\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233471 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-etc-openvswitch\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233500 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-netd\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233530 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-netns\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233559 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233697 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-script-lib\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233824 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-env-overrides\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.233831 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-config\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.238521 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovn-node-metrics-cert\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.248356 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.251985 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pzd6\" (UniqueName: \"kubernetes.io/projected/bcf34ad7-9bed-49eb-ad10-20bc5825292a-kube-api-access-6pzd6\") pod \"ovnkube-node-mzbq9\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.263286 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.273576 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:58:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.277189 4710 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.280937 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.280968 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.280977 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.280991 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.281000 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.383849 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.383952 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.383972 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.383998 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.384015 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.407510 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:58:56 crc kubenswrapper[4710]: W1128 16:58:56.419953 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcf34ad7_9bed_49eb_ad10_20bc5825292a.slice/crio-1f95ef1a130a6db1354044f3cddb37e9f50f871760b4165713bb1a8370ad3de0 WatchSource:0}: Error finding container 1f95ef1a130a6db1354044f3cddb37e9f50f871760b4165713bb1a8370ad3de0: Status 404 returned error can't find the container with id 1f95ef1a130a6db1354044f3cddb37e9f50f871760b4165713bb1a8370ad3de0 Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.487749 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.487888 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.487911 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.487943 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.487963 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.553508 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.591585 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.591642 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.591658 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.591677 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.591693 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.606203 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.610570 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b2ae360a-eba6-4e76-9942-83f5c21f3877-cni-binary-copy\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.610786 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f7bc0ce-8cd7-457d-8194-69354145dccc-cni-binary-copy\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.694951 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.695014 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.695032 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.695057 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.695075 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.732708 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.739517 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ca87069-1d78-4e20-ba15-f37acec7135b-mcd-auth-proxy-config\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.775108 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.780599 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.793082 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc9x8\" (UniqueName: \"kubernetes.io/projected/ac18a0af-e029-40a2-a035-963326dd8738-kube-api-access-wc9x8\") pod \"node-resolver-mhrhv\" (UID: \"ac18a0af-e029-40a2-a035-963326dd8738\") " pod="openshift-dns/node-resolver-mhrhv" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.797321 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.797361 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.797378 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.797401 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.797419 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.803375 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.838885 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.839054 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.839138 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839208 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839229 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:59:04.839189207 +0000 UTC m=+34.097489292 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839253 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839275 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839274 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:04.839259539 +0000 UTC m=+34.097559614 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839288 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839330 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:04.83931476 +0000 UTC m=+34.097614805 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.839368 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.839458 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839465 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839576 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:04.839563167 +0000 UTC m=+34.097863242 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839509 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839623 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839644 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.839697 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:04.839685341 +0000 UTC m=+34.097985426 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.900551 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.900598 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.900610 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.900626 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.900635 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:56Z","lastTransitionTime":"2025-11-28T16:58:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.929199 4710 secret.go:188] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.929276 4710 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.929345 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ca87069-1d78-4e20-ba15-f37acec7135b-proxy-tls podName:4ca87069-1d78-4e20-ba15-f37acec7135b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:57.429318266 +0000 UTC m=+26.687618311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/4ca87069-1d78-4e20-ba15-f37acec7135b-proxy-tls") pod "machine-config-daemon-9mscc" (UID: "4ca87069-1d78-4e20-ba15-f37acec7135b") : failed to sync secret cache: timed out waiting for the condition Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.929378 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-daemon-config podName:b2ae360a-eba6-4e76-9942-83f5c21f3877 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:57.429355487 +0000 UTC m=+26.687655612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-daemon-config") pod "multus-2j8nb" (UID: "b2ae360a-eba6-4e76-9942-83f5c21f3877") : failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.942582 4710 projected.go:288] Couldn't get configMap openshift-machine-config-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.942651 4710 projected.go:194] Error preparing data for projected volume kube-api-access-bpvcq for pod openshift-machine-config-operator/machine-config-daemon-9mscc: failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.942779 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4ca87069-1d78-4e20-ba15-f37acec7135b-kube-api-access-bpvcq podName:4ca87069-1d78-4e20-ba15-f37acec7135b nodeName:}" failed. No retries permitted until 2025-11-28 16:58:57.442734382 +0000 UTC m=+26.701034477 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bpvcq" (UniqueName: "kubernetes.io/projected/4ca87069-1d78-4e20-ba15-f37acec7135b-kube-api-access-bpvcq") pod "machine-config-daemon-9mscc" (UID: "4ca87069-1d78-4e20-ba15-f37acec7135b") : failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:56 crc kubenswrapper[4710]: E1128 16:58:56.944682 4710 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.956119 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 16:58:56 crc kubenswrapper[4710]: I1128 16:58:56.967700 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.003244 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.003302 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.003326 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.003347 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.003358 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.032944 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.041834 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mhrhv" Nov 28 16:58:57 crc kubenswrapper[4710]: E1128 16:58:57.041932 4710 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:57 crc kubenswrapper[4710]: W1128 16:58:57.052171 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac18a0af_e029_40a2_a035_963326dd8738.slice/crio-921f3aaf7f954d6e41de4388a5f2a09007d7dc9ec3ee5824cb27d0e254e590df WatchSource:0}: Error finding container 921f3aaf7f954d6e41de4388a5f2a09007d7dc9ec3ee5824cb27d0e254e590df: Status 404 returned error can't find the container with id 921f3aaf7f954d6e41de4388a5f2a09007d7dc9ec3ee5824cb27d0e254e590df Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.102876 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.106789 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.107071 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.107079 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.107094 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.107104 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.141309 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:57 crc kubenswrapper[4710]: E1128 16:58:57.141473 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.141803 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.141809 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:57 crc kubenswrapper[4710]: E1128 16:58:57.141879 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:57 crc kubenswrapper[4710]: E1128 16:58:57.141994 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.209040 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.209078 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.209093 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.209120 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.209132 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.248370 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.276913 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mhrhv" event={"ID":"ac18a0af-e029-40a2-a035-963326dd8738","Type":"ContainerStarted","Data":"24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.276971 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mhrhv" event={"ID":"ac18a0af-e029-40a2-a035-963326dd8738","Type":"ContainerStarted","Data":"921f3aaf7f954d6e41de4388a5f2a09007d7dc9ec3ee5824cb27d0e254e590df"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.278954 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f" exitCode=0 Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.279024 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.279077 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"1f95ef1a130a6db1354044f3cddb37e9f50f871760b4165713bb1a8370ad3de0"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.285421 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 16:58:57 crc kubenswrapper[4710]: E1128 16:58:57.292547 4710 projected.go:194] Error preparing data for projected volume kube-api-access-q2fth for pod openshift-multus/multus-additional-cni-plugins-t4jqb: failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:57 crc kubenswrapper[4710]: E1128 16:58:57.292640 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4f7bc0ce-8cd7-457d-8194-69354145dccc-kube-api-access-q2fth podName:4f7bc0ce-8cd7-457d-8194-69354145dccc nodeName:}" failed. No retries permitted until 2025-11-28 16:58:57.792618432 +0000 UTC m=+27.050918477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q2fth" (UniqueName: "kubernetes.io/projected/4f7bc0ce-8cd7-457d-8194-69354145dccc-kube-api-access-q2fth") pod "multus-additional-cni-plugins-t4jqb" (UID: "4f7bc0ce-8cd7-457d-8194-69354145dccc") : failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:57 crc kubenswrapper[4710]: E1128 16:58:57.296982 4710 projected.go:194] Error preparing data for projected volume kube-api-access-n5x7h for pod openshift-multus/multus-2j8nb: failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:57 crc kubenswrapper[4710]: E1128 16:58:57.297057 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b2ae360a-eba6-4e76-9942-83f5c21f3877-kube-api-access-n5x7h podName:b2ae360a-eba6-4e76-9942-83f5c21f3877 nodeName:}" failed. No retries permitted until 2025-11-28 16:58:57.797034619 +0000 UTC m=+27.055334664 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-n5x7h" (UniqueName: "kubernetes.io/projected/b2ae360a-eba6-4e76-9942-83f5c21f3877-kube-api-access-n5x7h") pod "multus-2j8nb" (UID: "b2ae360a-eba6-4e76-9942-83f5c21f3877") : failed to sync configmap cache: timed out waiting for the condition Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.298992 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.311717 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.311787 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 
16:58:57.311800 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.311819 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.311830 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.312520 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.323055 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.324867 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.335827 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.349262 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.362353 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.377364 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.390483 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.405305 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.413818 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.413864 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.413875 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.413891 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.413901 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.417620 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.430950 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.441567 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.448420 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-daemon-config\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.448496 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ca87069-1d78-4e20-ba15-f37acec7135b-proxy-tls\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.448537 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpvcq\" (UniqueName: \"kubernetes.io/projected/4ca87069-1d78-4e20-ba15-f37acec7135b-kube-api-access-bpvcq\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.449079 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/b2ae360a-eba6-4e76-9942-83f5c21f3877-multus-daemon-config\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.452400 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4ca87069-1d78-4e20-ba15-f37acec7135b-proxy-tls\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.453088 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.460668 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpvcq\" (UniqueName: \"kubernetes.io/projected/4ca87069-1d78-4e20-ba15-f37acec7135b-kube-api-access-bpvcq\") pod \"machine-config-daemon-9mscc\" (UID: \"4ca87069-1d78-4e20-ba15-f37acec7135b\") " pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.467925 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.482579 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.497556 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.515632 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.516488 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.516524 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.516536 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.516552 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.516563 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.529462 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.543478 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.543554 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.555150 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}]
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: W1128 16:58:57.566742 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ca87069_1d78_4e20_ba15_f37acec7135b.slice/crio-24208f834122d502174c9a6183d13a65aafb2ac72e655f39880cfaea4dff6b11 WatchSource:0}: Error finding container 24208f834122d502174c9a6183d13a65aafb2ac72e655f39880cfaea4dff6b11: Status 404 returned error can't find the container with id 24208f834122d502174c9a6183d13a65aafb2ac72e655f39880cfaea4dff6b11 Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.571933 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.591871 4710 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.603916 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.617234 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.618916 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.618946 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.618955 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.618968 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.618977 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.628581 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.641810 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.720824 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.720880 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.720895 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.720913 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.721299 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.824293 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.824342 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.824355 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.824379 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.824392 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.852502 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2fth\" (UniqueName: \"kubernetes.io/projected/4f7bc0ce-8cd7-457d-8194-69354145dccc-kube-api-access-q2fth\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.852553 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5x7h\" (UniqueName: \"kubernetes.io/projected/b2ae360a-eba6-4e76-9942-83f5c21f3877-kube-api-access-n5x7h\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.857525 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2fth\" (UniqueName: \"kubernetes.io/projected/4f7bc0ce-8cd7-457d-8194-69354145dccc-kube-api-access-q2fth\") pod \"multus-additional-cni-plugins-t4jqb\" (UID: \"4f7bc0ce-8cd7-457d-8194-69354145dccc\") " pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.857626 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5x7h\" (UniqueName: \"kubernetes.io/projected/b2ae360a-eba6-4e76-9942-83f5c21f3877-kube-api-access-n5x7h\") pod \"multus-2j8nb\" (UID: \"b2ae360a-eba6-4e76-9942-83f5c21f3877\") " pod="openshift-multus/multus-2j8nb" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.926950 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.926985 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.926995 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.927009 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:57 crc kubenswrapper[4710]: I1128 16:58:57.927021 
4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:57Z","lastTransitionTime":"2025-11-28T16:58:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.029971 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.030021 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.030030 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.030047 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.030058 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.133056 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.133104 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.133116 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.133134 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.133150 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.137288 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2j8nb" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.149491 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.235679 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.235727 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.235739 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.235766 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.235778 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.272690 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-26vk7"] Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.273127 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.275272 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.275272 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.275387 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.275394 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.283765 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.283814 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.283825 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"24208f834122d502174c9a6183d13a65aafb2ac72e655f39880cfaea4dff6b11"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.287302 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" 
event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.287351 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.287368 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.287381 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.287392 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.287403 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.292360 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.305567 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: W1128 16:58:58.307974 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7bc0ce_8cd7_457d_8194_69354145dccc.slice/crio-35a9b038618389ef84799b04b17cfc29c925d83a8e81ae8041314260bf7969d4 WatchSource:0}: Error finding container 35a9b038618389ef84799b04b17cfc29c925d83a8e81ae8041314260bf7969d4: Status 404 returned error can't find the container with id 35a9b038618389ef84799b04b17cfc29c925d83a8e81ae8041314260bf7969d4 Nov 28 16:58:58 crc kubenswrapper[4710]: W1128 16:58:58.309044 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2ae360a_eba6_4e76_9942_83f5c21f3877.slice/crio-060683505c139af1071920f2662fa210a7f2441540a2bba3423d2b3fcbffe4c5 WatchSource:0}: Error finding container 060683505c139af1071920f2662fa210a7f2441540a2bba3423d2b3fcbffe4c5: Status 404 returned error can't find the container with id 060683505c139af1071920f2662fa210a7f2441540a2bba3423d2b3fcbffe4c5 Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.327294 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.337512 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.337539 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.337547 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.337560 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.337569 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.343123 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.353910 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.361692 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31090e53-e553-42e8-a168-4e601ae0ccf0-host\") pod \"node-ca-26vk7\" (UID: \"31090e53-e553-42e8-a168-4e601ae0ccf0\") " pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.361788 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/31090e53-e553-42e8-a168-4e601ae0ccf0-serviceca\") pod \"node-ca-26vk7\" (UID: \"31090e53-e553-42e8-a168-4e601ae0ccf0\") " pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.362396 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhc4j\" (UniqueName: \"kubernetes.io/projected/31090e53-e553-42e8-a168-4e601ae0ccf0-kube-api-access-mhc4j\") pod \"node-ca-26vk7\" (UID: 
\"31090e53-e553-42e8-a168-4e601ae0ccf0\") " pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.366960 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\
"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.383045 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.399082 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.410688 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.420960 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.432729 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.440965 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.441006 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.441019 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.441033 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.441043 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.445346 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.456508 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.463024 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhc4j\" (UniqueName: \"kubernetes.io/projected/31090e53-e553-42e8-a168-4e601ae0ccf0-kube-api-access-mhc4j\") pod \"node-ca-26vk7\" (UID: \"31090e53-e553-42e8-a168-4e601ae0ccf0\") " pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.463060 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31090e53-e553-42e8-a168-4e601ae0ccf0-host\") pod \"node-ca-26vk7\" (UID: \"31090e53-e553-42e8-a168-4e601ae0ccf0\") " pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.463088 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/31090e53-e553-42e8-a168-4e601ae0ccf0-serviceca\") pod \"node-ca-26vk7\" (UID: \"31090e53-e553-42e8-a168-4e601ae0ccf0\") " pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.463152 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/31090e53-e553-42e8-a168-4e601ae0ccf0-host\") pod \"node-ca-26vk7\" (UID: \"31090e53-e553-42e8-a168-4e601ae0ccf0\") " pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.466087 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/31090e53-e553-42e8-a168-4e601ae0ccf0-serviceca\") pod \"node-ca-26vk7\" (UID: \"31090e53-e553-42e8-a168-4e601ae0ccf0\") " pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.475177 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.484079 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhc4j\" (UniqueName: \"kubernetes.io/projected/31090e53-e553-42e8-a168-4e601ae0ccf0-kube-api-access-mhc4j\") pod \"node-ca-26vk7\" (UID: \"31090e53-e553-42e8-a168-4e601ae0ccf0\") " pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.488990 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.501031 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.511015 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.524151 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.537117 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.543586 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.543677 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.543689 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.543706 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.543717 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.552448 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.566363 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.577731 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.586102 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.596530 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 
16:58:58.606238 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.618749 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.635030 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.646434 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.646502 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.646528 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.646557 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.646580 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.656721 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa6
1849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.749950 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.750027 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.750045 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.750075 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.750097 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.806381 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-26vk7" Nov 28 16:58:58 crc kubenswrapper[4710]: W1128 16:58:58.822989 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31090e53_e553_42e8_a168_4e601ae0ccf0.slice/crio-1e8ace7c7f0c7c187000476d842ccc6740d7f5c685679f2922fcb2e92c6850f7 WatchSource:0}: Error finding container 1e8ace7c7f0c7c187000476d842ccc6740d7f5c685679f2922fcb2e92c6850f7: Status 404 returned error can't find the container with id 1e8ace7c7f0c7c187000476d842ccc6740d7f5c685679f2922fcb2e92c6850f7 Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.854696 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.855165 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.855181 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.855211 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.855224 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.958912 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.958947 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.958955 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.958968 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:58 crc kubenswrapper[4710]: I1128 16:58:58.958978 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:58Z","lastTransitionTime":"2025-11-28T16:58:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.063267 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.063298 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.063308 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.063321 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.063331 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.141250 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.141335 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:58:59 crc kubenswrapper[4710]: E1128 16:58:59.141438 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:58:59 crc kubenswrapper[4710]: E1128 16:58:59.141526 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.141733 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:58:59 crc kubenswrapper[4710]: E1128 16:58:59.141857 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.165515 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.165566 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.165578 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.165597 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.165609 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.268539 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.268592 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.268605 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.268622 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.268634 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.293109 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-26vk7" event={"ID":"31090e53-e553-42e8-a168-4e601ae0ccf0","Type":"ContainerStarted","Data":"b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.293186 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-26vk7" event={"ID":"31090e53-e553-42e8-a168-4e601ae0ccf0","Type":"ContainerStarted","Data":"1e8ace7c7f0c7c187000476d842ccc6740d7f5c685679f2922fcb2e92c6850f7"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.295438 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2j8nb" event={"ID":"b2ae360a-eba6-4e76-9942-83f5c21f3877","Type":"ContainerStarted","Data":"464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.295499 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2j8nb" event={"ID":"b2ae360a-eba6-4e76-9942-83f5c21f3877","Type":"ContainerStarted","Data":"060683505c139af1071920f2662fa210a7f2441540a2bba3423d2b3fcbffe4c5"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.299530 4710 generic.go:334] "Generic (PLEG): container finished" podID="4f7bc0ce-8cd7-457d-8194-69354145dccc" containerID="3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d" exitCode=0 Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.299615 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" event={"ID":"4f7bc0ce-8cd7-457d-8194-69354145dccc","Type":"ContainerDied","Data":"3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.299730 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" event={"ID":"4f7bc0ce-8cd7-457d-8194-69354145dccc","Type":"ContainerStarted","Data":"35a9b038618389ef84799b04b17cfc29c925d83a8e81ae8041314260bf7969d4"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.320293 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.340383 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.354132 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.367955 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.370835 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.370897 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.370914 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.370942 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.370959 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.383864 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.396936 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.416488 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.429140 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
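
Reading these payloads raw is painful because the patch JSON is escaped twice: once by Go's %q quoting of the whole err field, and once more because the patch is itself an embedded quoted string. A minimal sketch for unwrapping one of these entries, assuming each journal entry sits on a single line as journalctl emits it (extract_patch is a hypothetical helper, not anything shipped with the kubelet):

import json
import re

ERR_RE = re.compile(r'err="(.*)"$')

def extract_patch(journal_line: str) -> dict:
    """Recover the status patch from one kubenswrapper 'Failed to update status' line."""
    m = ERR_RE.search(journal_line.rstrip())
    if not m:
        raise ValueError("no err field on this line")
    # Round 1: undo the Go %q quoting of the err string (\" -> ", \\ -> \).
    err = json.loads('"' + m.group(1) + '"')
    # The patch sits between 'failed to patch status "' and '" for pod'.
    marker = 'failed to patch status "'
    start = err.index(marker) + len(marker)
    end = err.index('" for pod', start)
    # Round 2: undo the inner quoting, then parse the patch JSON itself.
    return json.loads(json.loads('"' + err[start:end] + '"'))

# e.g. print(json.dumps(extract_patch(line), indent=2)) for each matching line
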
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.439445 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.453467 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.469821 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.473545 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.473602 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.473618 4710 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.473637 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.473648 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.489405 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z 
is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.503134 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.514045 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.526495 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.535255 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.545539 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.561528 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.576614 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.576675 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.576692 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.576716 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.576729 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.593098 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.617903 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.631954 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.649592 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z 
is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.666968 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.679242 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.679293 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.679304 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.679322 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.679334 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.683267 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.696297 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.709736 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.722585 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.734005 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:58:59Z is after 2025-08-24T17:21:41Z" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.782576 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.782641 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.782654 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.782724 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.782740 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.884670 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.884974 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.884986 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.885005 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.885017 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.987819 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.987870 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.987884 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.987905 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:58:59 crc kubenswrapper[4710]: I1128 16:58:59.987918 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:58:59Z","lastTransitionTime":"2025-11-28T16:58:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.090987 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.091049 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.091062 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.091082 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.091097 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.193051 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.193087 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.193099 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.193119 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.193133 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.295944 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.295998 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.296009 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.296031 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.296046 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.308607 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.311502 4710 generic.go:334] "Generic (PLEG): container finished" podID="4f7bc0ce-8cd7-457d-8194-69354145dccc" containerID="b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6" exitCode=0 Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.311564 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" event={"ID":"4f7bc0ce-8cd7-457d-8194-69354145dccc","Type":"ContainerDied","Data":"b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.332982 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.354540 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.369495 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.384342 4710 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.397577 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.399172 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.399216 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.399233 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.399257 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.399274 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.412280 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.426037 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.439919 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.453363 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.468935 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.472452 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.487260 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-
28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.502493 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.502539 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.502553 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.502573 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.502588 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.509861 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z 
is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.525639 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.539148 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.551300 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.565880 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.581482 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.593344 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.605027 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.605065 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.605077 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc 
kubenswrapper[4710]: I1128 16:59:00.605095 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.605108 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.606113 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.619141 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.632955 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.643797 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.652120 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.665498 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.681576 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.692644 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.704114 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.709820 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.709852 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.709863 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.709880 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.709893 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.715670 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:00Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.812574 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.812659 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.812681 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.812706 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.812724 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.920485 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.920843 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.920854 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.920872 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.920885 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:00Z","lastTransitionTime":"2025-11-28T16:59:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:00 crc kubenswrapper[4710]: I1128 16:59:00.995188 4710 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.023201 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.023258 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.023271 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.023319 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.023331 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.125490 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.125823 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.125911 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.125997 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.126088 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.140829 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.141224 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.141072 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:01 crc kubenswrapper[4710]: E1128 16:59:01.141402 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:01 crc kubenswrapper[4710]: E1128 16:59:01.141554 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:01 crc kubenswrapper[4710]: E1128 16:59:01.142054 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.157847 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-
regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' 
detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.172168 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.186104 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.200584 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.218249 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.232599 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.232679 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.232704 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.233206 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.233281 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.235256 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.247679 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.262180 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.277325 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.298119 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"
host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.312679 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.318084 4710 generic.go:334] "Generic (PLEG): container finished" podID="4f7bc0ce-8cd7-457d-8194-69354145dccc" 
containerID="248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261" exitCode=0 Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.318168 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" event={"ID":"4f7bc0ce-8cd7-457d-8194-69354145dccc","Type":"ContainerDied","Data":"248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.325506 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.338003 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.338059 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.338072 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.338088 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.338494 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.341519 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.355899 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.369562 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.384529 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.398750 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.427017 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.441729 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.442058 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.442072 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.442089 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.442099 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.467948 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.511557 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.544918 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.544964 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.544974 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.544995 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.545013 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.552839 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa6
1849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.591258 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.626023 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 
2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.647444 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.647484 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.647499 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.647517 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.647528 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.671073 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.709772 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.748869 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.750196 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.750237 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.750246 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 
16:59:01.750264 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.750273 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.787943 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.827304 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:01Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.853674 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.853721 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.853734 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.853767 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.853782 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.956684 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.956732 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.956743 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.956783 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:01 crc kubenswrapper[4710]: I1128 16:59:01.956795 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:01Z","lastTransitionTime":"2025-11-28T16:59:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.059798 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.059865 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.059891 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.059921 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.059943 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.164714 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.164822 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.164848 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.164875 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.164897 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.267741 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.267842 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.267888 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.267913 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.267957 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.325313 4710 generic.go:334] "Generic (PLEG): container finished" podID="4f7bc0ce-8cd7-457d-8194-69354145dccc" containerID="01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e" exitCode=0 Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.325355 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" event={"ID":"4f7bc0ce-8cd7-457d-8194-69354145dccc","Type":"ContainerDied","Data":"01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.332457 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.332873 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.342650 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.358170 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.360814 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.369786 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.370833 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.370889 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.370899 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.370915 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.370945 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.383851 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.396042 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.406218 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.417906 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.431967 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.441255 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.453489 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\
\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.466127 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.474524 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.474559 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.474569 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.474586 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.474597 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.481943 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa6
1849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.494676 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.503339 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.514660 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.526301 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.540485 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.554991 4710 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf7
3cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.577551 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.577589 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.577601 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.577620 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.577632 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.586136 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.596706 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.596776 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.596807 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.596825 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.596836 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: E1128 16:59:02.607336 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.611220 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.611263 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.611274 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.611296 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.611311 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: E1128 16:59:02.622468 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.625158 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.626455 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.626499 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.626512 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.626531 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.626546 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: E1128 16:59:02.639089 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.667499 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.705250 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.745839 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.789738 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.832151 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872
e878dd0fe33f63c36a93b19c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.860560 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.860601 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.860610 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.860627 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.860637 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.868560 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: E1128 16:59:02.873193 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.876745 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.876809 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.876822 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.876840 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.876851 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: E1128 16:59:02.887946 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: E1128 16:59:02.888093 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.890026 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.890068 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.890084 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.890101 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.890117 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.905025 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.949423 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:02Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.993471 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.993529 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.993546 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.993570 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:02 crc kubenswrapper[4710]: I1128 16:59:02.993587 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:02Z","lastTransitionTime":"2025-11-28T16:59:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.096816 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.096887 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.096908 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.096935 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.096952 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.140728 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:03 crc kubenswrapper[4710]: E1128 16:59:03.140922 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.141406 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:03 crc kubenswrapper[4710]: E1128 16:59:03.141477 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.141551 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:03 crc kubenswrapper[4710]: E1128 16:59:03.141624 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.200837 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.200891 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.200900 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.200920 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.200934 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.303639 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.303790 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.303814 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.303882 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.303901 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.341493 4710 generic.go:334] "Generic (PLEG): container finished" podID="4f7bc0ce-8cd7-457d-8194-69354145dccc" containerID="5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0" exitCode=0 Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.341674 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.342414 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" event={"ID":"4f7bc0ce-8cd7-457d-8194-69354145dccc","Type":"ContainerDied","Data":"5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.342817 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.362612 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.373484 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.380146 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.395472 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.407593 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.407637 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.407649 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.407671 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.407686 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.414460 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.428111 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.444040 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.462537 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.488710 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41a
c2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"rec
ursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.507936 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.510855 4710 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.510923 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.510940 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.510966 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.510988 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.522990 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.539465 4710 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.556841 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.573620 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.589099 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.603955 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.614780 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.614842 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.614856 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.614877 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.614890 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.621631 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.639547 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.670881 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.718629 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 
crc kubenswrapper[4710]: I1128 16:59:03.718673 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.718683 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.718701 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.718713 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.736932 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872
e878dd0fe33f63c36a93b19c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.758087 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.787190 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.821140 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.821423 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.821568 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.821697 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.821877 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.831077 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.872150 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.913591 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.925089 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.925126 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.925137 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.925153 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.925167 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:03Z","lastTransitionTime":"2025-11-28T16:59:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.955452 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:03 crc kubenswrapper[4710]: I1128 16:59:03.991913 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:03Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.028719 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.028804 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.028817 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.028837 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.028852 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.030610 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.067512 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.131734 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.131867 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.131885 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.131906 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.131920 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.256634 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.256670 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.256683 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.256703 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.256715 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.348750 4710 generic.go:334] "Generic (PLEG): container finished" podID="4f7bc0ce-8cd7-457d-8194-69354145dccc" containerID="0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9" exitCode=0 Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.348807 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" event={"ID":"4f7bc0ce-8cd7-457d-8194-69354145dccc","Type":"ContainerDied","Data":"0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.348948 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.360631 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.360687 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.360699 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.360720 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.360729 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.381479 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.396126 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.410219 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.426685 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263c
bb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.443877 4710 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.457067 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.463532 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.463567 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.463575 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.463588 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.463598 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.471958 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.486329 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.502959 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.513527 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.527690 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.547951 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.566366 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.566878 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.566999 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.567097 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.567204 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.591440 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.628518 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:04Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.669438 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.669490 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.669502 4710 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.669521 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.669533 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.771747 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.771799 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.771810 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.771826 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.771836 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.862641 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.862937 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:59:20.862896447 +0000 UTC m=+50.121196532 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.863186 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.863293 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.863386 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.863494 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.863421 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.863692 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:20.863680979 +0000 UTC m=+50.121981024 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.863704 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.863881 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.863968 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.863547 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.863490 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.864116 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.864128 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.864233 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:20.86405318 +0000 UTC m=+50.122353225 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.864327 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:20.864315167 +0000 UTC m=+50.122615222 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:04 crc kubenswrapper[4710]: E1128 16:59:04.864415 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:20.864405 +0000 UTC m=+50.122705045 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.875491 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.875645 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.875673 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.875744 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.875795 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.980106 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.980164 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.980181 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.980203 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:04 crc kubenswrapper[4710]: I1128 16:59:04.980230 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:04Z","lastTransitionTime":"2025-11-28T16:59:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.083741 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.083821 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.083838 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.083867 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.083885 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.141024 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.141078 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.141024 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:05 crc kubenswrapper[4710]: E1128 16:59:05.141186 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:05 crc kubenswrapper[4710]: E1128 16:59:05.141260 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:05 crc kubenswrapper[4710]: E1128 16:59:05.141326 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.186096 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.186153 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.186169 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.186192 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.186221 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.289313 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.289524 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.289620 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.289692 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.289752 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.361232 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" event={"ID":"4f7bc0ce-8cd7-457d-8194-69354145dccc","Type":"ContainerStarted","Data":"de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.361386 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.382451 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus
/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.391983 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.392018 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.392031 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.392050 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.392062 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.404546 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.429116 4710 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.452383 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.467182 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.483390 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.495650 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.495688 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.495702 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.495720 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.495731 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.500942 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.518044 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.533321 4710 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf7
3cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.547834 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.557866 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.568582 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.580317 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.591740 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:05Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.601808 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.601855 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.601877 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.601899 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.601916 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.704360 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.704420 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.704433 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.704451 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.704463 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.806910 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.806958 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.806968 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.806983 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.806993 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.910118 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.910166 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.910177 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.910199 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:05 crc kubenswrapper[4710]: I1128 16:59:05.910213 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:05Z","lastTransitionTime":"2025-11-28T16:59:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.012268 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.012310 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.012322 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.012340 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.012354 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.117463 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.117515 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.117525 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.117544 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.117555 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.220582 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.220630 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.220641 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.220661 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.220672 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.323450 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.323490 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.323532 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.323551 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.323563 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
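[Note on the repeated NotReady records above: interleaved with the patch failures, setters.go keeps flipping the node's Ready condition to False with reason KubeletNotReady, because the container runtime reports NetworkPluginNotReady: no CNI configuration file exists in /etc/kubernetes/cni/net.d/ yet. A simplified stand-in for that readiness test is sketched below; the real check in CRI-O/ocicni also parses and validates the files it finds, so this only mirrors the directory scan:

// Simplified network-readiness probe: report NetworkReady=false while the
// CNI config directory named in the log holds no network definition.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		log.Fatalf("read %s: %v", confDir, err)
	}
	var found []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found = append(found, e.Name())
		}
	}
	if len(found) == 0 {
		// Matches the condition in the log: the node stays NotReady until
		// the network provider (here, ovn-kubernetes) writes its config.
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
		return
	}
	fmt.Printf("NetworkReady=true: found %v\n", found)
}

The records that follow show the other half of the deadlock: the ovnkube-node pod that would write this configuration is itself crash-looping while its status updates are blocked by the expired webhook certificate. End of note.]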
Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.366010 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/0.log" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.369329 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c" exitCode=1 Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.369382 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.370366 4710 scope.go:117] "RemoveContainer" containerID="1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.385469 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.405948 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.419409 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.426678 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.426957 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.426971 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.426988 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.427000 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.438815 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.452086 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.466445 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.481965 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.496009 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.510833 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.527216 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58
:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c
62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.529373 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.529413 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.529425 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.529443 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.529458 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.543748 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872
e878dd0fe33f63c36a93b19c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"message\\\":\\\"minNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 16:59:05.229586 5983 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:05.229974 5983 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 16:59:05.229988 5983 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 16:59:05.229999 5983 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 16:59:05.230017 5983 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 16:59:05.230038 5983 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 16:59:05.230044 5983 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:05.230050 5983 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:05.230053 5983 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 16:59:05.230058 5983 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 16:59:05.230062 5983 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 16:59:05.230070 5983 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 16:59:05.230084 5983 factory.go:656] Stopping watch factory\\\\nI1128 16:59:05.230101 5983 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.558247 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.572793 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.586078 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:06Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.631752 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.631813 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.631822 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.631836 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.631846 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.734558 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.734623 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.734634 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.734651 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.734661 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.837088 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.837145 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.837162 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.837185 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.837204 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.940987 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.941027 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.941047 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.941069 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:06 crc kubenswrapper[4710]: I1128 16:59:06.941081 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:06Z","lastTransitionTime":"2025-11-28T16:59:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.042982 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.043023 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.043032 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.043046 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.043054 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.141443 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.141495 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:07 crc kubenswrapper[4710]: E1128 16:59:07.141601 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:07 crc kubenswrapper[4710]: E1128 16:59:07.141692 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.141820 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:07 crc kubenswrapper[4710]: E1128 16:59:07.141918 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.146011 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.146051 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.146067 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.146089 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.146103 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.249018 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.249063 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.249073 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.249090 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.249102 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.291680 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.352018 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.352091 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.352114 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.352144 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.352165 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.374044 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/0.log" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.376351 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.376868 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.392538 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.405197 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.419701 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.434947 4710 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b
1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.449550 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.454418 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.454451 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.454462 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.454479 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.454490 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.468376 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.483199 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.499530 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.511933 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.527704 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.541074 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.556903 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.557907 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.558071 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.558097 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.558128 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.558153 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.578871 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:
59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.600054 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4
d36dea587c069dc26f0c773e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"message\\\":\\\"minNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 16:59:05.229586 5983 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:05.229974 5983 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 16:59:05.229988 5983 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 16:59:05.229999 5983 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 16:59:05.230017 5983 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 16:59:05.230038 5983 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 16:59:05.230044 5983 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:05.230050 5983 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:05.230053 5983 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 16:59:05.230058 5983 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 16:59:05.230062 5983 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 16:59:05.230070 5983 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 16:59:05.230084 5983 factory.go:656] Stopping watch factory\\\\nI1128 16:59:05.230101 5983 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.661246 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.661525 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.661610 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.661694 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.661884 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.765409 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.765456 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.765468 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.765482 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.765491 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.868403 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.868463 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.868479 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.868501 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.868517 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.965848 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf"] Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.966260 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.968223 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.972992 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.977506 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.977588 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.977642 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.979156 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.979444 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:07Z","lastTransitionTime":"2025-11-28T16:59:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.988599 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:07 crc kubenswrapper[4710]: I1128 16:59:07.999476 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:07Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.018795 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.034509 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.055383 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"message\\\":\\\"minNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 16:59:05.229586 5983 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:05.229974 5983 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 16:59:05.229988 5983 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 16:59:05.229999 5983 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 16:59:05.230017 5983 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 16:59:05.230038 5983 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 16:59:05.230044 5983 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:05.230050 5983 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:05.230053 5983 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 16:59:05.230058 5983 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 16:59:05.230062 5983 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 16:59:05.230070 5983 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 16:59:05.230084 5983 factory.go:656] Stopping watch factory\\\\nI1128 16:59:05.230101 5983 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.069013 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.082223 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.082415 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.082487 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.082589 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.082666 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.089002 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.094393 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e03a307f-522c-480c-be7e-3ca520c12e49-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.094476 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t66cq\" (UniqueName: \"kubernetes.io/projected/e03a307f-522c-480c-be7e-3ca520c12e49-kube-api-access-t66cq\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.094515 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e03a307f-522c-480c-be7e-3ca520c12e49-ovnkube-config\") pod 
\"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.094597 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e03a307f-522c-480c-be7e-3ca520c12e49-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.110095 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.123970 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.143117 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.156927 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.171308 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.183731 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.185301 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.185358 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.185370 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.185387 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.185399 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.196135 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e03a307f-522c-480c-be7e-3ca520c12e49-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.196220 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e03a307f-522c-480c-be7e-3ca520c12e49-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.196243 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t66cq\" (UniqueName: \"kubernetes.io/projected/e03a307f-522c-480c-be7e-3ca520c12e49-kube-api-access-t66cq\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.196258 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e03a307f-522c-480c-be7e-3ca520c12e49-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.196923 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.197188 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e03a307f-522c-480c-be7e-3ca520c12e49-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.197271 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e03a307f-522c-480c-be7e-3ca520c12e49-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.202231 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e03a307f-522c-480c-be7e-3ca520c12e49-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.206470 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.211706 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t66cq\" (UniqueName: \"kubernetes.io/projected/e03a307f-522c-480c-be7e-3ca520c12e49-kube-api-access-t66cq\") pod \"ovnkube-control-plane-749d76644c-tktlf\" (UID: \"e03a307f-522c-480c-be7e-3ca520c12e49\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.288589 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.288646 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.288656 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.288675 4710 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.288688 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.291893 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" Nov 28 16:59:08 crc kubenswrapper[4710]: W1128 16:59:08.305781 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode03a307f_522c_480c_be7e_3ca520c12e49.slice/crio-add93a02a703259001cd2c1dfd2b5edf2b792ea29f80d8afe146b35c125dbba9 WatchSource:0}: Error finding container add93a02a703259001cd2c1dfd2b5edf2b792ea29f80d8afe146b35c125dbba9: Status 404 returned error can't find the container with id add93a02a703259001cd2c1dfd2b5edf2b792ea29f80d8afe146b35c125dbba9 Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.380219 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" event={"ID":"e03a307f-522c-480c-be7e-3ca520c12e49","Type":"ContainerStarted","Data":"add93a02a703259001cd2c1dfd2b5edf2b792ea29f80d8afe146b35c125dbba9"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.382230 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/1.log" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.382680 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/0.log" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.385408 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e" exitCode=1 Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.385464 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.385516 4710 scope.go:117] "RemoveContainer" containerID="1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.386183 4710 scope.go:117] "RemoveContainer" containerID="2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e" Nov 28 16:59:08 crc kubenswrapper[4710]: E1128 16:59:08.386353 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" Nov 28 16:59:08 
crc kubenswrapper[4710]: I1128 16:59:08.390544 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.390579 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.390591 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.390606 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.390619 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.399142 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\"
:{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.410162 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 
2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.425451 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.440454 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"
mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.460162 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerI
D\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1c3f86cec35c7dbe1c5b4e357620926f3124c872e878dd0fe33f63c36a93b19c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"message\\\":\\\"minNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1128 16:59:05.229586 5983 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:05.229974 5983 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1128 16:59:05.229988 5983 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1128 16:59:05.229999 5983 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1128 16:59:05.230017 5983 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1128 16:59:05.230038 5983 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1128 16:59:05.230044 5983 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:05.230050 5983 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:05.230053 5983 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1128 16:59:05.230058 5983 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1128 16:59:05.230062 5983 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1128 16:59:05.230070 5983 handler.go:208] Removed *v1.Node event handler 2\\\\nI1128 16:59:05.230084 5983 factory.go:656] Stopping watch factory\\\\nI1128 16:59:05.230101 5983 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.471639 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.485488 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.494047 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.494101 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.494116 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.494137 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.494148 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.497208 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.507459 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.518972 4710 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf7
3cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.529557 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.541188 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.553376 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.585670 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.596282 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.596494 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.596573 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.596652 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.596818 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.601180 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:08Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.700896 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.700958 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.700973 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.700999 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.701016 4710 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.803815 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.803858 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.803873 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.803891 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.803904 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.907098 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.907144 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.907160 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.907179 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:08 crc kubenswrapper[4710]: I1128 16:59:08.907190 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:08Z","lastTransitionTime":"2025-11-28T16:59:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.010251 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.010325 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.010346 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.010378 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.010400 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.113648 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.113694 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.113708 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.113727 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.113738 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.141948 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.141962 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:09 crc kubenswrapper[4710]: E1128 16:59:09.142175 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.142218 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:09 crc kubenswrapper[4710]: E1128 16:59:09.142334 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:09 crc kubenswrapper[4710]: E1128 16:59:09.142424 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.215538 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.215575 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.215586 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.215604 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.215616 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.318181 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.318224 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.318234 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.318252 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.318266 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.390940 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/1.log" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.394858 4710 scope.go:117] "RemoveContainer" containerID="2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e" Nov 28 16:59:09 crc kubenswrapper[4710]: E1128 16:59:09.395022 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.396131 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" event={"ID":"e03a307f-522c-480c-be7e-3ca520c12e49","Type":"ContainerStarted","Data":"02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.396167 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" event={"ID":"e03a307f-522c-480c-be7e-3ca520c12e49","Type":"ContainerStarted","Data":"04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.406378 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.416284 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.420152 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.420187 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.420197 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.420214 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.420222 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.424750 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.433567 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.434399 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-pwn66"] Nov 28 16:59:09 crc 
kubenswrapper[4710]: I1128 16:59:09.435005 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:09 crc kubenswrapper[4710]: E1128 16:59:09.435172 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.445912 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.453791 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.464234 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.477284 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.506036 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.514722 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw5cs\" (UniqueName: \"kubernetes.io/projected/a6cf6922-30b9-4011-a998-255a33c143df-kube-api-access-zw5cs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.514853 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.522792 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.522823 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.522832 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.522846 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.522855 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.523224 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.534749 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.561136 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.597335 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263c
bb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.616105 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw5cs\" (UniqueName: \"kubernetes.io/projected/a6cf6922-30b9-4011-a998-255a33c143df-kube-api-access-zw5cs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.616170 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:09 crc kubenswrapper[4710]: E1128 16:59:09.616319 4710 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:09 crc kubenswrapper[4710]: E1128 16:59:09.616436 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs podName:a6cf6922-30b9-4011-a998-255a33c143df nodeName:}" failed. No retries permitted until 2025-11-28 16:59:10.11641779 +0000 UTC m=+39.374717835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs") pod "network-metrics-daemon-pwn66" (UID: "a6cf6922-30b9-4011-a998-255a33c143df") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.618146 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.625162 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.625366 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.625428 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.625508 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.625591 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.636143 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.637830 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw5cs\" (UniqueName: \"kubernetes.io/projected/a6cf6922-30b9-4011-a998-255a33c143df-kube-api-access-zw5cs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.650848 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.663711 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.675100 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.690528 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.703108 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.713263 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.723402 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-co
nfig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.728314 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.728364 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.728377 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.728395 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.728409 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.736067 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.747455 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.763215 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.779816 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.800500 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.814989 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.831356 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.831399 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.831411 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.831427 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.831415 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.831440 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.844410 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.857191 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:09Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.934828 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.934876 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.934887 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.934904 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:09 crc kubenswrapper[4710]: I1128 16:59:09.934915 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:09Z","lastTransitionTime":"2025-11-28T16:59:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.037823 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.037892 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.037909 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.037934 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.037952 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.120996 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:10 crc kubenswrapper[4710]: E1128 16:59:10.121217 4710 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:10 crc kubenswrapper[4710]: E1128 16:59:10.121292 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs podName:a6cf6922-30b9-4011-a998-255a33c143df nodeName:}" failed. No retries permitted until 2025-11-28 16:59:11.121269368 +0000 UTC m=+40.379569453 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs") pod "network-metrics-daemon-pwn66" (UID: "a6cf6922-30b9-4011-a998-255a33c143df") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.140897 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.140928 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.140940 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.140955 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.140967 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.243996 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.244055 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.244073 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.244099 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.244117 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.347506 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.347552 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.347565 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.347583 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.347595 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.450093 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.450136 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.450144 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.450157 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.450166 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.553182 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.553645 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.553667 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.553695 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.553713 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.656731 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.656791 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.656802 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.656821 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.656834 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.759448 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.759489 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.759498 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.759516 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.759527 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.862222 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.862275 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.862286 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.862309 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.862321 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.971717 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.971783 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.971796 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.971814 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:10 crc kubenswrapper[4710]: I1128 16:59:10.971829 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:10Z","lastTransitionTime":"2025-11-28T16:59:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.074960 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.074996 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.075006 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.075023 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.075035 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.134630 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:11 crc kubenswrapper[4710]: E1128 16:59:11.134897 4710 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:11 crc kubenswrapper[4710]: E1128 16:59:11.135051 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs podName:a6cf6922-30b9-4011-a998-255a33c143df nodeName:}" failed. No retries permitted until 2025-11-28 16:59:13.135027373 +0000 UTC m=+42.393327439 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs") pod "network-metrics-daemon-pwn66" (UID: "a6cf6922-30b9-4011-a998-255a33c143df") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.140503 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:11 crc kubenswrapper[4710]: E1128 16:59:11.140678 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.140807 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:11 crc kubenswrapper[4710]: E1128 16:59:11.140893 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.140971 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:11 crc kubenswrapper[4710]: E1128 16:59:11.141047 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.141221 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:11 crc kubenswrapper[4710]: E1128 16:59:11.141325 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.157000 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.169879 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.177307 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.177337 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.177345 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.177358 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.177367 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.184547 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.200065 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.213467 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.222722 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.233557 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 
2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.242490 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.255074 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.271894 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.282963 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.283020 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.283034 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.283052 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.283066 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.298550 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4
d36dea587c069dc26f0c773e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.310477 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.326191 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.338071 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.350860 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.365018 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:11Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.386369 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.386657 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.386748 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.386865 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.386953 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.489245 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.489285 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.489296 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.489314 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.489331 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.592116 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.592215 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.592238 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.592266 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.592289 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.696108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.696147 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.696158 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.696175 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.696185 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.799055 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.799098 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.799109 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.799125 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.799136 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.901232 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.901270 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.901285 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.901301 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:11 crc kubenswrapper[4710]: I1128 16:59:11.901313 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:11Z","lastTransitionTime":"2025-11-28T16:59:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.004127 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.004173 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.004184 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.004200 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.004211 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.107487 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.107546 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.107557 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.107576 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.107588 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.211018 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.211059 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.211074 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.211093 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.211104 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.314953 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.315015 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.315069 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.315092 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.315106 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.417167 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.417209 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.417232 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.417248 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.417259 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.521806 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.521846 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.521857 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.521880 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.521895 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.624393 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.624460 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.624489 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.624513 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.624529 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.727089 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.727544 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.727682 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.727821 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.727925 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.831451 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.831783 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.831894 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.832085 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.832193 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.902525 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.902587 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.902600 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.902630 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.902652 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:12 crc kubenswrapper[4710]: E1128 16:59:12.918967 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:12Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.924057 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.924118 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.924131 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.924151 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.924162 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[4710]: E1128 16:59:12.941478 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:12Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.945910 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.945950 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.945963 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.945986 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.946000 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[4710]: E1128 16:59:12.959178 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:12Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.962839 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.962872 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.962880 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.962897 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.962906 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[4710]: E1128 16:59:12.977193 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:12Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.981428 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.981456 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.981464 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.981477 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.981488 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:12 crc kubenswrapper[4710]: E1128 16:59:12.995098 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:12Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:12 crc kubenswrapper[4710]: E1128 16:59:12.995218 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.996964 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.996997 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.997009 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.997029 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:12 crc kubenswrapper[4710]: I1128 16:59:12.997040 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:12Z","lastTransitionTime":"2025-11-28T16:59:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.100515 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.100841 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.100943 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.101030 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.101108 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.141314 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.141313 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.141418 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:13 crc kubenswrapper[4710]: E1128 16:59:13.142310 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.142443 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:13 crc kubenswrapper[4710]: E1128 16:59:13.142445 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:13 crc kubenswrapper[4710]: E1128 16:59:13.142670 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:13 crc kubenswrapper[4710]: E1128 16:59:13.142963 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.156661 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:13 crc kubenswrapper[4710]: E1128 16:59:13.156941 4710 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:13 crc kubenswrapper[4710]: E1128 16:59:13.157063 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs podName:a6cf6922-30b9-4011-a998-255a33c143df nodeName:}" failed. No retries permitted until 2025-11-28 16:59:17.157031415 +0000 UTC m=+46.415331630 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs") pod "network-metrics-daemon-pwn66" (UID: "a6cf6922-30b9-4011-a998-255a33c143df") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.204278 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.204357 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.204381 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.204407 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.204424 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.307733 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.308161 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.308372 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.308582 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.308726 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.411523 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.411567 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.411578 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.411593 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.411603 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.515532 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.515591 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.515607 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.515631 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.515646 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.618622 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.619638 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.619835 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.620082 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.620273 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.729276 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.729633 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.729721 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.729843 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.729933 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.833441 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.833501 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.833516 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.833538 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.833550 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.936343 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.936400 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.936416 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.936438 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:13 crc kubenswrapper[4710]: I1128 16:59:13.936452 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:13Z","lastTransitionTime":"2025-11-28T16:59:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.039402 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.039447 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.039458 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.039476 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.039488 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.142564 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.142606 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.142620 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.142638 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.142649 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.245464 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.245691 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.245827 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.245923 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.246005 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.349903 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.350317 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.350568 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.350741 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.350908 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.453242 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.453449 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.453518 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.453587 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.453657 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.557708 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.557809 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.557833 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.557863 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.557885 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.660842 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.660911 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.660933 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.660967 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.660991 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.764349 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.764417 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.764429 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.764454 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.764469 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.868504 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.868553 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.868563 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.868579 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.868589 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.971862 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.971922 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.971944 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.971995 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:14 crc kubenswrapper[4710]: I1128 16:59:14.972013 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:14Z","lastTransitionTime":"2025-11-28T16:59:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.074695 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.074768 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.074788 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.074804 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.074833 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.141025 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.141088 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.141136 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.141034 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:15 crc kubenswrapper[4710]: E1128 16:59:15.141297 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:15 crc kubenswrapper[4710]: E1128 16:59:15.141430 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:15 crc kubenswrapper[4710]: E1128 16:59:15.141588 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:15 crc kubenswrapper[4710]: E1128 16:59:15.141866 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.177942 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.178260 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.178383 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.178510 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.178635 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.281645 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.281690 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.281700 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.281716 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.281725 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.384134 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.384214 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.384247 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.384276 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.384297 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.486459 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.486845 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.487010 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.487143 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.487299 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.589798 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.589854 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.589869 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.589890 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.589904 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.691856 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.691899 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.691928 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.691949 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.691961 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.796375 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.796473 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.796497 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.796532 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.796557 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.899670 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.899730 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.899752 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.899808 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:15 crc kubenswrapper[4710]: I1128 16:59:15.899829 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:15Z","lastTransitionTime":"2025-11-28T16:59:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.002410 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.002456 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.002470 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.002517 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.002534 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.106244 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.106303 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.106316 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.106336 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.106353 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.209250 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.209308 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.209327 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.209351 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.209370 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.312684 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.312818 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.312847 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.312878 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.312901 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.416582 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.416641 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.416659 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.416684 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.416704 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.520508 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.520717 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.520732 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.520754 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.520784 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.624571 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.624631 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.624653 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.624683 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.624701 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.727649 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.727711 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.727735 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.727808 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.727849 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.831119 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.831187 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.831209 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.831238 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.831259 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.934974 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.935421 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.935938 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.936197 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:16 crc kubenswrapper[4710]: I1128 16:59:16.936404 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:16Z","lastTransitionTime":"2025-11-28T16:59:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.039648 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.040081 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.040269 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.040414 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.040547 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.141424 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.141515 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:17 crc kubenswrapper[4710]: E1128 16:59:17.141646 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.141424 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:17 crc kubenswrapper[4710]: E1128 16:59:17.141831 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:17 crc kubenswrapper[4710]: E1128 16:59:17.141952 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.142521 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:17 crc kubenswrapper[4710]: E1128 16:59:17.142933 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.145264 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.145356 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.145453 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.145512 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.145590 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.203812 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:17 crc kubenswrapper[4710]: E1128 16:59:17.204220 4710 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:17 crc kubenswrapper[4710]: E1128 16:59:17.204378 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs podName:a6cf6922-30b9-4011-a998-255a33c143df nodeName:}" failed. No retries permitted until 2025-11-28 16:59:25.204351293 +0000 UTC m=+54.462651368 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs") pod "network-metrics-daemon-pwn66" (UID: "a6cf6922-30b9-4011-a998-255a33c143df") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.248838 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.248884 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.248893 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.248908 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.248917 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.351672 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.352004 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.352112 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.352270 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.352350 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.455751 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.456149 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.456344 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.456473 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.456625 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.559646 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.560025 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.560208 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.560392 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.560609 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.663841 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.663929 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.663953 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.663982 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.664003 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.768087 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.768133 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.768142 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.768160 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.768169 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.871308 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.871370 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.871393 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.871424 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.871454 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.973615 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.973657 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.973670 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.973688 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:17 crc kubenswrapper[4710]: I1128 16:59:17.973700 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:17Z","lastTransitionTime":"2025-11-28T16:59:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.076562 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.076611 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.076620 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.076637 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.076647 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.179411 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.179452 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.179464 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.179484 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.179497 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.283177 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.283253 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.283274 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.283301 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.283319 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.386496 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.386546 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.386556 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.386573 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.386583 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.489132 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.489202 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.489217 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.489241 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.489260 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.591626 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.591745 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.591795 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.591813 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.591824 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.694748 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.694843 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.694857 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.694882 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.694897 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.798397 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.798485 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.798505 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.798529 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.798546 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.901093 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.901136 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.901149 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.901166 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:18 crc kubenswrapper[4710]: I1128 16:59:18.901180 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:18Z","lastTransitionTime":"2025-11-28T16:59:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.003838 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.003877 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.003888 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.003907 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.003919 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.106383 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.106435 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.106446 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.106466 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.106479 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.142075 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.142388 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.142208 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.142263 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:19 crc kubenswrapper[4710]: E1128 16:59:19.142652 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:19 crc kubenswrapper[4710]: E1128 16:59:19.142837 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:19 crc kubenswrapper[4710]: E1128 16:59:19.143096 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:19 crc kubenswrapper[4710]: E1128 16:59:19.143464 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.209926 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.209994 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.210012 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.210039 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.210059 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.313075 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.313511 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.313683 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.313860 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.314055 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.417036 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.417091 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.417104 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.417126 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.417140 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.520451 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.520713 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.520733 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.520801 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.520841 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.624133 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.624427 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.624810 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.625014 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.625192 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.729106 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.729430 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.729574 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.729805 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.729955 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.833576 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.834489 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.834644 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.834816 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.834982 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.907029 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.918839 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.935086 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.937615 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.937939 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.938169 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.938365 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.938579 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:19Z","lastTransitionTime":"2025-11-28T16:59:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.955118 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.973402 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:19 crc kubenswrapper[4710]: I1128 16:59:19.993153 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:19Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.014367 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.032040 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.042964 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.043052 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.043071 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.043099 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.043119 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.050822 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.070037 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.091144 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.103822 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.118255 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.130157 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.146177 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.146244 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.146267 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.146298 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.146321 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.150004 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.164391 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.176257 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.186987 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:20Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.249393 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.249451 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.249466 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.249487 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.249501 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.353128 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.353226 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.353244 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.353284 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.353306 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.458009 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.458079 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.458098 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.458123 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.458138 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.561450 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.561533 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.561558 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.561589 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.561611 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.665256 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.665367 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.665402 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.665444 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.665484 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.768341 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.768409 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.768433 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.768465 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.768487 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.872122 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.872195 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.872217 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.872245 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.872267 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.946625 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.946826 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 16:59:52.94679957 +0000 UTC m=+82.205099615 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.946878 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.946923 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.947018 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.947046 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947125 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947162 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 
16:59:20.947187 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947209 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947232 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947249 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947260 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947188 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947291 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:52.947260783 +0000 UTC m=+82.205560868 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947322 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:52.947308114 +0000 UTC m=+82.205608249 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947335 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-11-28 16:59:52.947330005 +0000 UTC m=+82.205630050 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:20 crc kubenswrapper[4710]: E1128 16:59:20.947348 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 16:59:52.947341515 +0000 UTC m=+82.205641560 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.976210 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.976271 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.976280 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.976296 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:20 crc kubenswrapper[4710]: I1128 16:59:20.976305 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:20Z","lastTransitionTime":"2025-11-28T16:59:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.079329 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.079378 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.079391 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.079409 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.079421 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.141408 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.141408 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:21 crc kubenswrapper[4710]: E1128 16:59:21.141604 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.141455 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.141665 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:21 crc kubenswrapper[4710]: E1128 16:59:21.141818 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:21 crc kubenswrapper[4710]: E1128 16:59:21.141986 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:21 crc kubenswrapper[4710]: E1128 16:59:21.142141 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.153320 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.165348 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.181603 4710 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.181967 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.181997 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.182006 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.182022 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.182033 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.195239 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.211055 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.224695 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.238881 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.251365 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.262388 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.275295 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.284911 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.284954 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.284966 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.284987 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.285000 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.289292 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.300880 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.317396 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.334601 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.361526 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.377794 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.387733 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.387800 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.387819 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.387839 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.387851 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.388196 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:21Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.490908 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.490953 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.490963 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.490979 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.490990 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.594289 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.594358 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.594373 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.594392 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.594404 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.700269 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.700705 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.700933 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.701108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.701324 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.804956 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.805023 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.805041 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.805066 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.805079 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.909238 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.909310 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.909322 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.909349 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:21 crc kubenswrapper[4710]: I1128 16:59:21.909364 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:21Z","lastTransitionTime":"2025-11-28T16:59:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.011963 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.012016 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.012029 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.012052 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.012068 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.116161 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.116233 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.116246 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.116266 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.116279 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.219250 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.219330 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.219358 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.219388 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.219412 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.323140 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.323186 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.323200 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.323222 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.323230 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.426349 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.426414 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.426432 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.426455 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.426472 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.529083 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.529136 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.529147 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.529165 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.529177 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.632376 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.632605 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.632663 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.632724 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.632845 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.735750 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.735801 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.735812 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.735830 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.735842 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.839439 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.839733 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.839819 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.839919 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.839993 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.942742 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.943375 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.943468 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.943561 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:22 crc kubenswrapper[4710]: I1128 16:59:22.943657 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:22Z","lastTransitionTime":"2025-11-28T16:59:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.046528 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.046895 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.047023 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.047222 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.047337 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.132673 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.132894 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.132926 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.132957 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.132979 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.141515 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.141589 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.141556 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.141666 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.142076 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.142243 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.142507 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.142667 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.151590 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:23Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.155096 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.155226 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.155294 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.155407 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.155490 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.168203 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:23Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.171875 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.172124 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.172653 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.173159 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.173635 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.187521 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:23Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.196028 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.196289 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.196464 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.196648 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.196857 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.212828 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:23Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.216690 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.216733 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.216793 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.216821 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.216840 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.227927 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:23Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:23 crc kubenswrapper[4710]: E1128 16:59:23.228085 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.229810 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.229846 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.229855 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.229871 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.229881 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.332293 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.332341 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.332356 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.332379 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.332392 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.434712 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.434808 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.434825 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.434846 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.434861 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.537716 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.537785 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.537797 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.537813 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.537826 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.640418 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.640751 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.640897 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.640991 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.641098 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.745020 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.745066 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.745082 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.745099 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.745110 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.848662 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.848925 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.848993 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.849066 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.849158 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.951551 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.951598 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.951613 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.951630 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:23 crc kubenswrapper[4710]: I1128 16:59:23.951642 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:23Z","lastTransitionTime":"2025-11-28T16:59:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.055122 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.055201 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.055237 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.055271 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.055292 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.159118 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.159172 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.159189 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.159212 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.159225 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.262515 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.262576 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.262589 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.262611 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.262624 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.365441 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.365893 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.366077 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.366302 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.366435 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.470295 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.470359 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.470376 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.470400 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.470417 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.573370 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.573441 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.573458 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.573485 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.573507 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.676928 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.677006 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.677032 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.677062 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.677084 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.779145 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.779185 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.779196 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.779214 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.779225 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.881540 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.881599 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.881612 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.881633 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.881646 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.983969 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.984027 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.984044 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.984066 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:24 crc kubenswrapper[4710]: I1128 16:59:24.984080 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:24Z","lastTransitionTime":"2025-11-28T16:59:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.086398 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.086443 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.086456 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.086472 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.086484 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.140902 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.140923 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.141346 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.141589 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:25 crc kubenswrapper[4710]: E1128 16:59:25.141631 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:25 crc kubenswrapper[4710]: E1128 16:59:25.141853 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:25 crc kubenswrapper[4710]: E1128 16:59:25.142580 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:25 crc kubenswrapper[4710]: E1128 16:59:25.142663 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.143006 4710 scope.go:117] "RemoveContainer" containerID="2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.190530 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.190722 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.190731 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.190747 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.190770 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.294108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.294181 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.294200 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.294228 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.294248 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.294430 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:25 crc kubenswrapper[4710]: E1128 16:59:25.294984 4710 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:25 crc kubenswrapper[4710]: E1128 16:59:25.295728 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs podName:a6cf6922-30b9-4011-a998-255a33c143df nodeName:}" failed. No retries permitted until 2025-11-28 16:59:41.295693673 +0000 UTC m=+70.553993728 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs") pod "network-metrics-daemon-pwn66" (UID: "a6cf6922-30b9-4011-a998-255a33c143df") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.396326 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.396366 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.396377 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.396394 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.396408 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.458393 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/1.log" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.461166 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.461948 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.478396 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.493938 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.498777 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.498827 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.498838 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.498857 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.498868 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.510580 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.522264 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.539862 4710 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8
ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.559309 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.576968 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.591627 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.601136 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.601173 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.601184 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.601224 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.601236 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.609512 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.622392 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.636748 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.655267 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.678907 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.703224 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\
\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.704343 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.704393 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.704405 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.704422 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.704443 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.726061 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.748824 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: 
Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.762030 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:25Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.806752 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.806988 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.807055 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.807127 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.807189 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.910319 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.910355 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.910364 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.910376 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:25 crc kubenswrapper[4710]: I1128 16:59:25.910385 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:25Z","lastTransitionTime":"2025-11-28T16:59:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.013099 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.013828 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.013848 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.013864 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.013874 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.115704 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.115746 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.115769 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.115789 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.115800 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.218656 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.218698 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.218709 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.218726 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.218738 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.321569 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.321639 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.321667 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.321696 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.321721 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.427868 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.427934 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.427955 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.427984 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.428005 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.530709 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.530982 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.531004 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.531036 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.531057 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.633669 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.633732 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.633745 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.633977 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.633992 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.737835 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.737920 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.737941 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.737970 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.737996 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.841119 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.841184 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.841199 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.841221 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.841236 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.944299 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.944341 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.944350 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.944364 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:26 crc kubenswrapper[4710]: I1128 16:59:26.944374 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:26Z","lastTransitionTime":"2025-11-28T16:59:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.046675 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.046710 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.046718 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.046731 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.046740 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.141553 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.141703 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:27 crc kubenswrapper[4710]: E1128 16:59:27.141870 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.142001 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.142029 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:27 crc kubenswrapper[4710]: E1128 16:59:27.142175 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:27 crc kubenswrapper[4710]: E1128 16:59:27.142333 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:27 crc kubenswrapper[4710]: E1128 16:59:27.142629 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.149208 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.149264 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.149284 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.149310 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.149329 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.252099 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.252167 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.252183 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.252208 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.252225 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.355705 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.355814 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.355840 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.355868 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.355889 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.458785 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.458828 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.458855 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.458872 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.458881 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.472129 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/2.log" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.473234 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/1.log" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.477420 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24" exitCode=1 Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.477464 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.477504 4710 scope.go:117] "RemoveContainer" containerID="2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.478745 4710 scope.go:117] "RemoveContainer" containerID="ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24" Nov 28 16:59:27 crc kubenswrapper[4710]: E1128 16:59:27.479058 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.503462 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.519484 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.532695 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.545152 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.562911 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.562992 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.563007 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.563028 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: 
I1128 16:59:27.563040 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.564882 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.583184 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.595064 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.615066 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.631898 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.651865 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:26Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1128 16:59:26.183491 6404 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183114 6404 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183825 6404 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184010 6404 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184063 6404 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:59:26.184435 6404 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:59:26.184584 6404 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:26.184648 6404 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 16:59:26.184653 6404 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:26.184925 6404 factory.go:656] Stopping watch factory\\\\nI1128 16:59:26.185008 6404 ovnkube.go:599] Stopped ovnkube\\\\nI1128 16:59:26.185093 6404 handler.go:208] Removed *v1.Node event handler 
7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.664537 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.667093 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.667130 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.667141 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.667158 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.667169 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.679393 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.693524 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.704288 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.715131 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.727872 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.738516 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:27Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.769500 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.769544 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.769555 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.769574 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.769587 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.872387 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.872440 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.872458 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.872482 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.872501 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.975367 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.975423 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.975440 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.975461 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:27 crc kubenswrapper[4710]: I1128 16:59:27.975476 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:27Z","lastTransitionTime":"2025-11-28T16:59:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.077933 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.077995 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.078012 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.078040 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.078057 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.181177 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.181218 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.181230 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.181246 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.181258 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.283952 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.283985 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.283994 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.284008 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.284020 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.386614 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.386674 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.386686 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.386705 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.386717 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.486427 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/2.log" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.491131 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.491231 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.491252 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.491277 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.491294 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.594002 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.594052 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.594063 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.594082 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.594095 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.696912 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.696967 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.696983 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.697005 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.697022 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.801056 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.801108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.801122 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.801145 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.801159 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.904528 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.904776 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.904790 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.904810 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:28 crc kubenswrapper[4710]: I1128 16:59:28.904826 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:28Z","lastTransitionTime":"2025-11-28T16:59:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.008309 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.008390 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.008425 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.008456 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.008479 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.112001 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.112049 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.112068 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.112091 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.112107 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.140743 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:29 crc kubenswrapper[4710]: E1128 16:59:29.140995 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.141022 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.141181 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:29 crc kubenswrapper[4710]: E1128 16:59:29.141240 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.141016 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:29 crc kubenswrapper[4710]: E1128 16:59:29.141486 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:29 crc kubenswrapper[4710]: E1128 16:59:29.141619 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.215612 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.215654 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.215667 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.215684 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.215696 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.318504 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.318568 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.318590 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.318620 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.318639 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.421373 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.421448 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.421473 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.421503 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.421526 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.525474 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.525567 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.525588 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.525618 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.525649 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.628863 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.628923 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.628941 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.628966 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.628985 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.732953 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.733030 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.733047 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.733071 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.733087 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.836361 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.836462 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.836489 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.836517 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.836536 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.938862 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.938926 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.938944 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.938972 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:29 crc kubenswrapper[4710]: I1128 16:59:29.938990 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:29Z","lastTransitionTime":"2025-11-28T16:59:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.042562 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.042644 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.042667 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.042701 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.042724 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.146466 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.146504 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.146513 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.146529 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.146538 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.250174 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.250246 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.250259 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.250283 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.250298 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.352934 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.352974 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.352983 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.352996 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.353004 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.455369 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.455429 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.455438 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.455451 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.455461 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.559570 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.559639 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.559665 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.559703 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.559732 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.662259 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.662319 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.662329 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.662351 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.662363 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.764643 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.764707 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.764721 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.764747 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.764798 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.867946 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.868009 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.868026 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.868050 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.868066 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.971593 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.971739 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.971949 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.972027 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:30 crc kubenswrapper[4710]: I1128 16:59:30.972047 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:30Z","lastTransitionTime":"2025-11-28T16:59:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.080866 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.080933 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.080953 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.080992 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.081011 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.141086 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.141096 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:59:31 crc kubenswrapper[4710]: E1128 16:59:31.141546 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.141203 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 16:59:31 crc kubenswrapper[4710]: E1128 16:59:31.141784 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 16:59:31 crc kubenswrapper[4710]: E1128 16:59:31.141990 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.142052 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66"
Nov 28 16:59:31 crc kubenswrapper[4710]: E1128 16:59:31.142329 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.157191 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.170667 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.181312 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.183643 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.183701 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.183718 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.183741 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.183779 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.194823 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.210028 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.223711 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.236541 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.249320 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bc
ce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.267274 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f6
97a15aee6c77774f90e10e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:26Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1128 16:59:26.183491 6404 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183114 6404 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183825 6404 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184010 6404 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184063 6404 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:59:26.184435 6404 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:59:26.184584 6404 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:26.184648 6404 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 16:59:26.184653 6404 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:26.184925 6404 factory.go:656] Stopping 
watch factory\\\\nI1128 16:59:26.185008 6404 ovnkube.go:599] Stopped ovnkube\\\\nI1128 16:59:26.185093 6404 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o:
//ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.277159 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.287666 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.287720 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.287731 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.287751 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.287783 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.287988 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.298344 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.307449 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.319116 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.331077 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.349002 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.365197 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:31Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.390339 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.390380 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.390391 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.390410 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.390420 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.492419 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.492450 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.492461 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.492475 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.492485 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.594660 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.594696 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.594705 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.594719 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.594728 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.697651 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.697714 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.697732 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.697786 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.697807 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.801009 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.801102 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.801116 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.801136 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.801150 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.904325 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.904450 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.904472 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.904537 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:31 crc kubenswrapper[4710]: I1128 16:59:31.904559 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:31Z","lastTransitionTime":"2025-11-28T16:59:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.007473 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.007538 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.007556 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.007583 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.007601 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.111370 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.111466 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.111485 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.111512 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.111530 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.215585 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.215652 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.215665 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.215683 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.215694 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.318223 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.318276 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.318286 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.318301 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.318311 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.421501 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.421551 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.421561 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.421579 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.421589 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.523677 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.523739 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.523793 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.523822 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.523838 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.626715 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.626845 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.626872 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.626903 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.626933 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.729614 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.729666 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.729676 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.729694 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.729718 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.832238 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.832279 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.832287 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.832302 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.832313 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.935131 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.935176 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.935184 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.935198 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:32 crc kubenswrapper[4710]: I1128 16:59:32.935209 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:32Z","lastTransitionTime":"2025-11-28T16:59:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.037561 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.037602 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.037612 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.037626 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.037636 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.140069 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.140115 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.140129 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.140147 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.140158 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.140425 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.140443 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.140598 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.141102 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.141196 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.141548 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.141706 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.141900 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.243469 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.243524 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.243537 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.243555 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.243568 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.249450 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.249486 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.249499 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.249515 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.249524 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.267210 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:33Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.272284 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.272358 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.272377 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.272403 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.272423 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.292033 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:33Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.297313 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.297358 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
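Every NotReady heartbeat above carries the same root message: no CNI configuration file in /etc/kubernetes/cni/net.d/, which is also why the network-metrics-daemon, network-check-source, network-check-target and networking-console-plugin pods cannot get sandboxes. A minimal on-node check, assuming shell access and the standard CRI-O tooling (the commands below are common defaults, not taken from this log):

    # Has the network plugin written any CNI config yet?
    ls -l /etc/kubernetes/cni/net.d/
    # Inspect the sandboxes for the pods the kubelet cannot sync (names from the log above)
    sudo crictl pods --name network-metrics-daemon-pwn66
    sudo crictl pods --name network-check-target-xd92c

If the directory is empty, the network provider has not started, which matches the NetworkPluginNotReady reason in the heartbeats.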
event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.297368 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.297388 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.297400 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.316360 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:33Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.321806 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.321856 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
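Each of these status patches is rejected before it ever reaches the node object: the call to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 fails because the serving certificate expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-28. A quick confirmation from the node, assuming openssl is installed (a sketch, not an official diagnostic):

    # Print the validity window of the certificate served at the webhook endpoint
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates

The notAfter value should match the 2025-08-24T17:21:41Z cutoff quoted in the x509 error.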
event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.321866 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.321885 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.321896 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.337953 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:33Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.342317 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.342430 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
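While these patches keep failing, the API server's stored copy of the node goes stale, so the conditions recorded in the cluster can lag well behind the heartbeats logged here. To compare, assuming a working kubeconfig for this cluster:

    # Show the Ready condition as last persisted by the API server
    oc get node crc -o jsonpath='{.status.conditions[?(@.type=="Ready")]}{"\n"}'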
event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.342493 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.342581 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.342641 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.353872 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:33Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:33 crc kubenswrapper[4710]: E1128 16:59:33.354112 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.356010 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.356065 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.356144 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.356192 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.356214 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.458923 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.458986 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.459004 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.459027 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.459046 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.561823 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.561889 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.561907 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.561931 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.561953 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.665794 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.665848 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.665861 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.665880 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.665892 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.768391 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.768777 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.768908 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.769029 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.769102 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.872592 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.872651 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.872674 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.872869 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.872900 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.977064 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.977111 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.977121 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.977138 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:33 crc kubenswrapper[4710]: I1128 16:59:33.977148 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:33Z","lastTransitionTime":"2025-11-28T16:59:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.081356 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.081397 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.081415 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.081438 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.081503 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.183824 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.183877 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.183889 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.183907 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.183918 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.287812 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.287869 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.287887 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.287912 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.287934 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.390945 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.391022 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.391034 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.391074 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.391092 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.494409 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.494469 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.494484 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.494509 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.494527 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.597217 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.597256 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.597266 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.597280 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.597290 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.700674 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.700733 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.700790 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.700822 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.700858 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.804280 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.804340 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.804355 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.804378 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.804393 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.906573 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.906614 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.906624 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.906640 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:34 crc kubenswrapper[4710]: I1128 16:59:34.906652 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:34Z","lastTransitionTime":"2025-11-28T16:59:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.009544 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.010057 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.010161 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.010294 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.010379 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.112929 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.113204 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.113316 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.113423 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.113669 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.141263 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.141304 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.141263 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:35 crc kubenswrapper[4710]: E1128 16:59:35.141427 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.141450 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:35 crc kubenswrapper[4710]: E1128 16:59:35.141690 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:35 crc kubenswrapper[4710]: E1128 16:59:35.141839 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:35 crc kubenswrapper[4710]: E1128 16:59:35.141957 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.151582 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.216342 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.216596 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.216658 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.216752 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.216849 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.319364 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.319425 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.319439 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.319457 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.319469 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.422182 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.422590 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.422690 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.422798 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.422912 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.525177 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.525225 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.525236 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.525254 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.525266 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.627648 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.628159 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.628261 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.628361 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.628470 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.731301 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.731344 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.731355 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.731370 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.731379 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.834375 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.834458 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.834475 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.834531 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.834547 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.938175 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.938216 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.938228 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.938245 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:35 crc kubenswrapper[4710]: I1128 16:59:35.938256 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:35Z","lastTransitionTime":"2025-11-28T16:59:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.040675 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.041065 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.041216 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.041356 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.041495 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.144468 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.144918 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.145123 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.145306 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.145431 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.247927 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.247964 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.247973 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.247988 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.248000 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.350037 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.350080 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.350094 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.350108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.350120 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.452867 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.453122 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.453136 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.453153 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.453166 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.555437 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.555475 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.555488 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.555510 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.555522 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.658793 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.658840 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.658880 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.658903 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.658915 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.761188 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.761238 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.761251 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.761268 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.761277 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.869307 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.869374 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.869393 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.869429 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.869447 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.972715 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.972784 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.972840 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.973180 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:36 crc kubenswrapper[4710]: I1128 16:59:36.973226 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:36Z","lastTransitionTime":"2025-11-28T16:59:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.075457 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.075733 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.075873 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.075974 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.076070 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.140949 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.140976 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.140949 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66"
Nov 28 16:59:37 crc kubenswrapper[4710]: E1128 16:59:37.141129 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 16:59:37 crc kubenswrapper[4710]: E1128 16:59:37.141052 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.141354 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:59:37 crc kubenswrapper[4710]: E1128 16:59:37.141515 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 16:59:37 crc kubenswrapper[4710]: E1128 16:59:37.141339 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.179338 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.179370 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.179382 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.179397 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.179408 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.282345 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.282396 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.282413 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.282438 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.282455 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.385464 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.386428 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.386583 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.386731 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.386900 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.489902 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.489976 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.489989 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.490006 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.490015 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.592783 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.592828 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.592839 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.592857 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.592868 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.696503 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.696577 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.696602 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.696638 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.696663 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.799880 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.799923 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.799936 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.799955 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.799968 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.903082 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.903153 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.903168 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.903282 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:37 crc kubenswrapper[4710]: I1128 16:59:37.903363 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:37Z","lastTransitionTime":"2025-11-28T16:59:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.006510 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.006550 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.006561 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.006578 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.006589 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.109245 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.109301 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.109331 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.109366 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.109379 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.212123 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.212161 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.212170 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.212184 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.212197 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.314616 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.314661 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.314671 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.314688 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.314699 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.417110 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.417145 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.417153 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.417167 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.417176 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.519820 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.519924 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.519946 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.519977 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.519994 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.623344 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.623398 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.623411 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.623432 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.623446 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.726866 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.726915 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.726926 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.726941 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.726951 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.829859 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.829930 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.829943 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.829964 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.829978 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.933095 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.933157 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.933175 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.933201 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:38 crc kubenswrapper[4710]: I1128 16:59:38.933218 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:38Z","lastTransitionTime":"2025-11-28T16:59:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.036207 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.036319 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.036338 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.036366 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.036385 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.138790 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.139044 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.139142 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.139216 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.139286 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.141199 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66"
Nov 28 16:59:39 crc kubenswrapper[4710]: E1128 16:59:39.141403 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.141242 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 16:59:39 crc kubenswrapper[4710]: E1128 16:59:39.141607 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.141255 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.141217 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:59:39 crc kubenswrapper[4710]: E1128 16:59:39.142486 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:59:39 crc kubenswrapper[4710]: E1128 16:59:39.142715 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.242846 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.242917 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.242935 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.242966 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.242990 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.346262 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.346328 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.346344 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.346367 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.346383 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.448927 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.448981 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.448996 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.449013 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.449025 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.552144 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.552197 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.552209 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.552227 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.552243 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.655105 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.655152 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.655162 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.655179 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.655190 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.758206 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.758265 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.758277 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.758297 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.758312 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.861092 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.861164 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.861190 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.861224 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.861249 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.963537 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.963622 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.963639 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.963657 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:39 crc kubenswrapper[4710]: I1128 16:59:39.963672 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:39Z","lastTransitionTime":"2025-11-28T16:59:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.067388 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.067432 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.067446 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.067465 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.067477 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.170654 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.170706 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.170717 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.170736 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.170746 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.273429 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.273464 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.273473 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.273488 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.273513 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.375686 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.375731 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.375740 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.375771 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.375781 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.479976 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.480048 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.480070 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.480089 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.480101 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.582226 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.582276 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.582293 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.582310 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.582320 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.684865 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.684912 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.684925 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.684944 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.684957 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.787285 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.787334 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.787347 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.787366 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.787382 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.889900 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.889943 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.889956 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.889970 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.889979 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.992443 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.992497 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.992514 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.992538 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:40 crc kubenswrapper[4710]: I1128 16:59:40.992555 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:40Z","lastTransitionTime":"2025-11-28T16:59:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.094533 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.094575 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.094587 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.094603 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.094614 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.141072 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.141105 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 16:59:41 crc kubenswrapper[4710]: E1128 16:59:41.141201 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.141218 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.141254 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66"
Nov 28 16:59:41 crc kubenswrapper[4710]: E1128 16:59:41.141443 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 16:59:41 crc kubenswrapper[4710]: E1128 16:59:41.141529 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 16:59:41 crc kubenswrapper[4710]: E1128 16:59:41.141612 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df"
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.152081 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd88991-908e-4c47-a6c7-c2ded9e54311\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z"
Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.164704 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.179154 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.189173 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired 
or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.196600 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.196641 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.196653 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.196671 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.196690 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.209315 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run
/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.231515 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.253793 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e470388c1aac38fb5bec60a39f822198e0b51a4d36dea587c069dc26f0c773e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"message\\\":\\\"iting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI1128 16:59:07.202113 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202123 6190 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202127 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9\\\\nI1128 16:59:07.202135 6190 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf\\\\nI1128 16:59:07.202141 6190 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-node-mzbq9 in node crc\\\\nF1128 16:59:07.202143 6190 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Po\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:26Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1128 16:59:26.183491 6404 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183114 6404 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183825 6404 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184010 6404 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184063 6404 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:59:26.184435 6404 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:59:26.184584 6404 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:26.184648 6404 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 16:59:26.184653 6404 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:26.184925 6404 factory.go:656] Stopping watch factory\\\\nI1128 16:59:26.185008 6404 ovnkube.go:599] Stopped ovnkube\\\\nI1128 16:59:26.185093 6404 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.267256 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.282024 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.299095 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.299164 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.299178 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.299197 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.299236 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.300207 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.316005 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.329543 4710 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.344895 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.362607 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.379366 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.382183 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:41 crc kubenswrapper[4710]: E1128 16:59:41.382436 4710 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:41 crc kubenswrapper[4710]: E1128 16:59:41.382595 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs podName:a6cf6922-30b9-4011-a998-255a33c143df nodeName:}" failed. No retries permitted until 2025-11-28 17:00:13.382572494 +0000 UTC m=+102.640872549 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs") pod "network-metrics-daemon-pwn66" (UID: "a6cf6922-30b9-4011-a998-255a33c143df") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.394678 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
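
Note on the two MountVolume errors just above: the metrics-certs mount fails because the object "openshift-multus"/"metrics-daemon-secret" is not registered in the kubelet's cache yet, and the operation is parked by nestedpendingoperations with durationBeforeRetry 32s. That 32s is a doubling per-operation backoff. A minimal sketch of the schedule, assuming the upstream constants (roughly 0.5s initial, capped a little over two minutes); treat the exact values as assumptions, not a pinned API:

package main

import (
	"fmt"
	"time"
)

// Doubling retry schedule applied to a failed volume mount operation.
// Constants mirror k8s.io/kubernetes' goroutinemap/exponentialbackoff
// at the time of writing (assumption).
func main() {
	const (
		initialDelay = 500 * time.Millisecond          // assumed initial backoff
		maxDelay     = 2*time.Minute + 2*time.Second   // assumed cap
	)
	delay := initialDelay
	for attempt := 1; attempt <= 9; attempt++ {
		fmt.Printf("failure %d: durationBeforeRetry %v\n", attempt, delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	// "durationBeforeRetry 32s" in the log is the 7th consecutive
	// failure: 500ms, 1s, 2s, 4s, 8s, 16s, 32s.
}
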
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.402115 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.402165 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.402179 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.402200 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.402214 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.409039 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.421252 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:41Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.505080 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.505132 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.505144 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.505162 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.505175 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.607810 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.607852 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.607863 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.607881 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.607892 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.710920 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.710995 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.711018 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.711047 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.711068 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
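
Note on the repeating NodeNotReady heartbeats: the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/ (the conf dir named in the message), and with ovnkube-controller crash-looping (see below) nothing is writing one, so the condition keeps being re-recorded every ~100ms as the node status syncs. A minimal sketch of that readiness test, assuming the libcni convention of accepting *.conf, *.conflist, or *.json files:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Checks the conf dir the way libcni does: the network is considered
// ready once at least one CNI config file exists. The dir comes from
// the log; the extension list is an assumption borrowed from libcni.
func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, _ := filepath.Glob(filepath.Join(confDir, pat))
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI config present:", found)
}
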
Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.814284 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.814367 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.814391 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.814426 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.814448 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.917030 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.917112 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.917131 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.917162 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:41 crc kubenswrapper[4710]: I1128 16:59:41.917180 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:41Z","lastTransitionTime":"2025-11-28T16:59:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.020219 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.020272 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.020283 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.020302 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.020314 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
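
Note on the "Failed to update status for pod" entries: they all share one root cause. Each status PATCH from the kubelet is intercepted by the pod.network-node-identity.openshift.io admission webhook served on 127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-28, so every call fails TLS verification and the apiserver returns "Internal error occurred". The failing check is the standard validity-window test in Go's crypto/x509. A minimal sketch, assuming the certificate sits under the /etc/webhook-cert/ mount visible in the webhook container's volumeMounts (the tls.crt filename is an assumption):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/webhook-cert/tls.crt") // assumed filename
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// This is the condition behind "certificate has expired or is not
	// yet valid": now must fall inside [NotBefore, NotAfter].
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	} else {
		fmt.Println("certificate is within its validity window")
	}
}
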
Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.123619 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.123656 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.123669 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.123685 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.123696 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.141636 4710 scope.go:117] "RemoveContainer" containerID="ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24" Nov 28 16:59:42 crc kubenswrapper[4710]: E1128 16:59:42.141841 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.154200 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.164675 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd88991-908e-4c47-a6c7-c2ded9e54311\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.175112 4710 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\
\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.186532 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a
2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.196292 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.210550 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.223772 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.229736 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.229803 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.229824 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.229847 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.229860 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.251824 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f6
97a15aee6c77774f90e10e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:26Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1128 16:59:26.183491 6404 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183114 6404 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183825 6404 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184010 6404 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184063 6404 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:59:26.184435 6404 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:59:26.184584 6404 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:26.184648 6404 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 16:59:26.184653 6404 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:26.184925 6404 factory.go:656] Stopping watch factory\\\\nI1128 16:59:26.185008 6404 ovnkube.go:599] Stopped ovnkube\\\\nI1128 16:59:26.185093 6404 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.265363 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.279615 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.291607 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.301533 4710 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.313166 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.327795 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.332452 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.332481 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.332491 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.332507 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.332518 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.337901 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.349448 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.361492 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.371207 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:42Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.435620 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.435668 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.435698 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.435718 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.435733 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.538840 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.539415 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.539520 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.539601 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.539677 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.642747 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.642817 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.642832 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.642854 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.642873 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.745055 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.745085 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.745094 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.745108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.745118 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.847431 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.847469 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.847481 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.847495 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.847505 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.950608 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.950661 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.950676 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.950698 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:42 crc kubenswrapper[4710]: I1128 16:59:42.950714 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:42Z","lastTransitionTime":"2025-11-28T16:59:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.053740 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.053822 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.053839 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.053867 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.053885 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.141120 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.141235 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.141339 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.141357 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.141529 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.141160 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.141656 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.141914 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.156651 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.156899 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.157021 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.157109 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.157175 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.259597 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.259887 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.259960 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.260027 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.260081 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.363236 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.363280 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.363289 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.363305 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.363317 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.405965 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.406005 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.406013 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.406027 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.406039 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.418998 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.423491 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.423544 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.423562 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.423585 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.423602 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.438099 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.441613 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.441652 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.441670 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.441692 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.441709 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.457228 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.461794 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.461832 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.461861 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.461879 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.461890 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.473097 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.477130 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.477164 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.477173 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.477188 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.477198 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.495565 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:43Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:43 crc kubenswrapper[4710]: E1128 16:59:43.495691 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.497162 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.497190 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.497202 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.497215 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.497225 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.600371 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.600447 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.600461 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.600483 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.600497 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.703856 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.703934 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.703952 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.703981 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.704001 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.806509 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.806572 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.806595 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.806627 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.806650 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.908996 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.909035 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.909046 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.909062 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:43 crc kubenswrapper[4710]: I1128 16:59:43.909072 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:43Z","lastTransitionTime":"2025-11-28T16:59:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.011264 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.011334 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.011353 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.011378 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.011400 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.114483 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.114525 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.114543 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.114569 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.114585 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.216258 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.216520 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.216740 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.216904 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.216996 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.318949 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.319399 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.319640 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.319897 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.320053 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.422695 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.422735 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.422748 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.422782 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.422793 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.524674 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.524713 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.524722 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.524740 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.524750 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.627405 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.627449 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.627459 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.627474 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.627483 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.730050 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.730099 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.730113 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.730133 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.730147 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.832637 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.832879 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.832942 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.833007 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.833064 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.935394 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.935447 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.935456 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.935470 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:44 crc kubenswrapper[4710]: I1128 16:59:44.935480 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:44Z","lastTransitionTime":"2025-11-28T16:59:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.038569 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.038831 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.038905 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.038977 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.039059 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.140539 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.140843 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.140634 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.140610 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:45 crc kubenswrapper[4710]: E1128 16:59:45.141154 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:45 crc kubenswrapper[4710]: E1128 16:59:45.141397 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:45 crc kubenswrapper[4710]: E1128 16:59:45.141580 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:45 crc kubenswrapper[4710]: E1128 16:59:45.141729 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.142888 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.142923 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.142933 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.142947 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.142957 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.245875 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.245943 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.245963 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.245990 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.246007 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.348681 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.349032 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.349138 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.349873 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.349981 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.453542 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.453619 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.453656 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.453679 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.453691 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.557380 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.558040 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.558180 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.558344 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.558456 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.558943 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/0.log" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.559005 4710 generic.go:334] "Generic (PLEG): container finished" podID="b2ae360a-eba6-4e76-9942-83f5c21f3877" containerID="464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7" exitCode=1 Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.559041 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2j8nb" event={"ID":"b2ae360a-eba6-4e76-9942-83f5c21f3877","Type":"ContainerDied","Data":"464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.559490 4710 scope.go:117] "RemoveContainer" containerID="464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.581390 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.593974 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.607564 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.622581 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.640987 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:26Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1128 16:59:26.183491 6404 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183114 6404 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183825 6404 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184010 6404 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184063 6404 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:59:26.184435 6404 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:59:26.184584 6404 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:26.184648 6404 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 16:59:26.184653 6404 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:26.184925 6404 factory.go:656] Stopping watch factory\\\\nI1128 16:59:26.185008 6404 ovnkube.go:599] Stopped ovnkube\\\\nI1128 16:59:26.185093 6404 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.651213 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.661122 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd88991-908e-4c47-a6c7-c2ded9e54311\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.662442 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.662467 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.662478 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.662494 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.662505 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.673688 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"nam
e\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.686241 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.695637 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.706888 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:44Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d\\\\n2025-11-28T16:58:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d to /host/opt/cni/bin/\\\\n2025-11-28T16:58:59Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.718069 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.729349 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.739261 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.752463 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.764359 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.764404 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.764414 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.764430 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.764443 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.765693 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.777235 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.790874 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:45Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.866894 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.866927 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.866938 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.866954 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.866965 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.970706 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.970749 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.970776 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.970791 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:45 crc kubenswrapper[4710]: I1128 16:59:45.970800 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:45Z","lastTransitionTime":"2025-11-28T16:59:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.073489 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.073551 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.073563 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.073584 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.073596 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.177899 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.177939 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.177950 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.177966 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.177978 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.281153 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.281232 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.281255 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.281285 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.281307 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.384620 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.384727 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.384748 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.384806 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.384829 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.486992 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.487050 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.487068 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.487112 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.487131 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.564329 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/0.log" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.564392 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2j8nb" event={"ID":"b2ae360a-eba6-4e76-9942-83f5c21f3877","Type":"ContainerStarted","Data":"f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.577363 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.588399 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.589274 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.589322 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.589339 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.589363 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.589379 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.600956 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.612286 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.630188 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:44Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d\\\\n2025-11-28T16:58:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d to /host/opt/cni/bin/\\\\n2025-11-28T16:58:59Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.651197 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.673727 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:26Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1128 16:59:26.183491 6404 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183114 6404 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183825 6404 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184010 6404 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184063 6404 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:59:26.184435 6404 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:59:26.184584 6404 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:26.184648 6404 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 16:59:26.184653 6404 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:26.184925 6404 factory.go:656] Stopping watch factory\\\\nI1128 16:59:26.185008 6404 ovnkube.go:599] Stopped ovnkube\\\\nI1128 16:59:26.185093 6404 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.691205 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.692029 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.692080 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.692103 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.692135 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.692159 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.707150 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd88991-908e-4c47-a6c7-c2ded9e54311\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.725052 4710 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.741892 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.760888 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.773465 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.787033 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.794628 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.794665 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.794682 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.794703 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.794721 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.805247 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.821492 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.834848 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.853609 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:46Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.898034 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.898085 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.898102 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.898127 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:46 crc kubenswrapper[4710]: I1128 16:59:46.898147 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:46Z","lastTransitionTime":"2025-11-28T16:59:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.001170 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.001240 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.001257 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.001283 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.001302 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.104977 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.105039 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.105050 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.105071 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.105084 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.141089 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.141122 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.141117 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.141342 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:47 crc kubenswrapper[4710]: E1128 16:59:47.141332 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:47 crc kubenswrapper[4710]: E1128 16:59:47.141457 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:47 crc kubenswrapper[4710]: E1128 16:59:47.141567 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:47 crc kubenswrapper[4710]: E1128 16:59:47.141694 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.208621 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.208684 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.208699 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.208718 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.208730 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.312127 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.312253 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.312286 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.312328 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.312360 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.414957 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.415335 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.415481 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.415611 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.415726 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.528936 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.529317 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.529508 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.529715 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.529974 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.633088 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.633148 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.633170 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.633198 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.633218 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.736450 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.736493 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.736508 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.736530 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.736546 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.840564 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.840644 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.840735 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.840833 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.840895 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.944001 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.944130 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.944158 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.944189 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:47 crc kubenswrapper[4710]: I1128 16:59:47.944210 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:47Z","lastTransitionTime":"2025-11-28T16:59:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.047488 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.047557 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.047576 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.047603 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.047623 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.150936 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.150978 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.150990 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.151008 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.151019 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.255207 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.255625 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.255916 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.256187 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.256342 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.359384 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.359446 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.359463 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.359488 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.359507 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.462523 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.462603 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.462628 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.462663 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.462687 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.565801 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.565855 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.565877 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.565903 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.565921 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.669797 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.669879 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.669899 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.669927 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.669947 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.773090 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.773158 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.773176 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.773206 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.773232 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.875813 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.875862 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.875874 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.875890 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.875902 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.979538 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.979576 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.979586 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.979603 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:48 crc kubenswrapper[4710]: I1128 16:59:48.979614 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:48Z","lastTransitionTime":"2025-11-28T16:59:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.083233 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.083342 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.083362 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.083388 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.083409 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.141082 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.141162 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.141191 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:49 crc kubenswrapper[4710]: E1128 16:59:49.141300 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:49 crc kubenswrapper[4710]: E1128 16:59:49.141401 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:49 crc kubenswrapper[4710]: E1128 16:59:49.141482 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.141841 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:49 crc kubenswrapper[4710]: E1128 16:59:49.142210 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.195114 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.195174 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.195191 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.195246 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.195263 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.297818 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.297876 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.297893 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.297918 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.297934 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.400539 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.401003 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.401075 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.401142 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.401245 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.503500 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.503560 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.503580 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.503603 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.503620 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.606188 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.606222 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.606233 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.606247 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.606257 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.709620 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.709689 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.709708 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.709738 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.709800 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.812626 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.812678 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.812695 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.812718 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.812734 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.915435 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.915474 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.915485 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.915501 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:49 crc kubenswrapper[4710]: I1128 16:59:49.915511 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:49Z","lastTransitionTime":"2025-11-28T16:59:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.018506 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.018548 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.018558 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.018574 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.018584 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.121878 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.121963 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.121982 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.122275 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.122311 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.225644 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.225699 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.225716 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.225740 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.225793 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.328358 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.328405 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.328422 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.328444 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.328460 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.431073 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.431134 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.431152 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.431175 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.431192 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.534108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.534179 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.534202 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.534237 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.534272 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.637505 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.637569 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.637592 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.637625 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.637643 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.740149 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.740206 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.740224 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.740247 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.740264 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.842836 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.842927 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.842954 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.843017 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.843040 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.945734 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.945846 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.945869 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.945901 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:50 crc kubenswrapper[4710]: I1128 16:59:50.945924 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:50Z","lastTransitionTime":"2025-11-28T16:59:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.048624 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.048673 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.048693 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.048713 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.048729 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.141242 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.141385 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:51 crc kubenswrapper[4710]: E1128 16:59:51.141481 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.141520 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.141533 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:51 crc kubenswrapper[4710]: E1128 16:59:51.141673 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:51 crc kubenswrapper[4710]: E1128 16:59:51.141827 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:51 crc kubenswrapper[4710]: E1128 16:59:51.141941 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.154222 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.154311 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.154339 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.154374 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.154411 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.166206 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\
\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054
caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.193709 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f6
97a15aee6c77774f90e10e24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:26Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1128 16:59:26.183491 6404 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183114 6404 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183825 6404 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184010 6404 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184063 6404 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:59:26.184435 6404 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:59:26.184584 6404 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:26.184648 6404 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 16:59:26.184653 6404 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:26.184925 6404 factory.go:656] Stopping watch factory\\\\nI1128 16:59:26.185008 6404 ovnkube.go:599] Stopped ovnkube\\\\nI1128 16:59:26.185093 6404 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.211891 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.225286 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd88991-908e-4c47-a6c7-c2ded9e54311\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.241704 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.258817 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.258901 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.258921 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.258949 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.258968 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.259579 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.271705 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.286328 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:44Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d\\\\n2025-11-28T16:58:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d to /host/opt/cni/bin/\\\\n2025-11-28T16:58:59Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.300186 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.315798 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.327790 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.338320 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.353088 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.362220 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.362289 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.362312 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.362336 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.362357 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.372191 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.387173 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.405291 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.416810 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.428054 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:51Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.465257 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.465303 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.465312 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.465329 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.465339 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.570122 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.570197 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.570216 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.570243 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.570264 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.673975 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.674074 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.674093 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.674150 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.674170 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.777266 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.777312 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.777323 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.777339 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.777350 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.881100 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.881164 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.881181 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.881207 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.881227 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.984722 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.984797 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.984810 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.984831 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:51 crc kubenswrapper[4710]: I1128 16:59:51.984849 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:51Z","lastTransitionTime":"2025-11-28T16:59:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.088948 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.089006 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.089023 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.089046 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.089066 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.192702 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.192773 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.192783 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.192802 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.192815 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.295154 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.295247 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.295273 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.295312 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.295340 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.397927 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.397990 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.398009 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.398033 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.398054 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.501584 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.501633 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.501645 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.501663 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.501675 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.605451 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.605515 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.605537 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.605570 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.605590 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.708439 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.708508 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.708526 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.708551 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.708569 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.811720 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.811824 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.811843 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.811865 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.811882 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.914557 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.914633 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.914657 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.914689 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:52 crc kubenswrapper[4710]: I1128 16:59:52.914714 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:52Z","lastTransitionTime":"2025-11-28T16:59:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.009215 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.009339 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.009382 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.009441 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.009475 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009473 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009589 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009620 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009629 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009642 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009594 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.009572927 +0000 UTC m=+146.267873002 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009824 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009891 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009909 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.009845 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.009814505 +0000 UTC m=+146.268114590 (durationBeforeRetry 1m4s). 
Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.010048 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.010005761 +0000 UTC m=+146.268305806 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.010073 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.010065323 +0000 UTC m=+146.268365368 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.010082 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 17:00:57.010077973 +0000 UTC m=+146.268378018 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
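The "not registered" failures above mean the kubelet's internal object cache has no entry yet for those ConfigMaps and Secrets; they do not prove the objects are missing server-side. A hypothetical client-go helper, not part of the kubelet, to confirm the referenced ConfigMaps exist on the API server (the kubeconfig path is an assumption; substitute whatever admin kubeconfig the cluster provides):

```go
// cmcheck.go - hypothetical helper: check that the ConfigMaps the projected
// kube-api-access volumes reference exist on the API server. If they exist
// here while the kubelet still logs "not registered", the problem is the
// kubelet's object cache / sync state rather than missing objects.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; adjust to the cluster's admin kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"kube-root-ca.crt", "openshift-service-ca.crt"} {
		_, err := cs.CoreV1().ConfigMaps("openshift-network-diagnostics").
			Get(context.TODO(), name, metav1.GetOptions{})
		fmt.Printf("configmap openshift-network-diagnostics/%s: err=%v\n", name, err)
	}
}
```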
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.017140 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.017194 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.017218 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.017246 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.017267 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.121571 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.121641 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.121666 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.121695 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.121721 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.141255 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.141325 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.141412 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.141413 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.141546 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.141659 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.141804 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.141948 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.224466 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.224537 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.224575 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.224605 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.224627 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.328065 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.328132 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.328141 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.328157 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.328166 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.430944 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.431002 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.431018 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.431042 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.431059 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.534455 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.534518 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.534540 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.534586 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.534613 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.637438 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.637511 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.637537 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.637567 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.637589 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.740815 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.740885 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.740913 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.740943 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.740966 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.755832 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.755890 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.755906 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.755926 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.755944 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.777495 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.782845 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.782913 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.782933 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.782960 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.782978 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.801639 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.806514 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.806573 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.806639 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.806675 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.806699 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.823275 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.827549 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.827589 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.827600 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.827621 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.827634 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.845517 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.850663 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.850708 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.850728 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.850747 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.850780 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.865739 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:53Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:53 crc kubenswrapper[4710]: E1128 16:59:53.865937 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.868361 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.868427 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.868448 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.868475 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.868494 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.971379 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.971447 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.971469 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.971501 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:53 crc kubenswrapper[4710]: I1128 16:59:53.971520 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:53Z","lastTransitionTime":"2025-11-28T16:59:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.074651 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.074714 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.074738 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.074796 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.074821 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.178519 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.178598 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.178618 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.178645 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.178664 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.281555 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.281619 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.281636 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.281660 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.281678 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.385212 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.385277 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.385294 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.385321 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.385340 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.488450 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.488547 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.488584 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.488620 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.488644 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.591328 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.591380 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.591400 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.591419 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.591434 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.694015 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.694104 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.694128 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.694162 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.694185 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.796865 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.796937 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.796951 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.796974 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.796992 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.899534 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.899602 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.899613 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.899629 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:54 crc kubenswrapper[4710]: I1128 16:59:54.899638 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:54Z","lastTransitionTime":"2025-11-28T16:59:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.003529 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.003568 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.003579 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.003593 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.003602 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.107071 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.107127 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.107146 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.107170 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.107190 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.141238 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:55 crc kubenswrapper[4710]: E1128 16:59:55.141531 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.141991 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:55 crc kubenswrapper[4710]: E1128 16:59:55.142150 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.142441 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:55 crc kubenswrapper[4710]: E1128 16:59:55.142563 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.143599 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:55 crc kubenswrapper[4710]: E1128 16:59:55.143751 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.144188 4710 scope.go:117] "RemoveContainer" containerID="ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.210467 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.210506 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.210516 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.210535 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.210545 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.314387 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.314452 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.314471 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.314499 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.314517 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.417608 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.417663 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.417675 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.417697 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.417709 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.520152 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.520202 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.520217 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.520236 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.520248 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.622401 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.622444 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.622455 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.622471 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.622482 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.725110 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.725139 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.725147 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.725160 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.725169 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.828336 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.828411 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.828423 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.828440 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.828456 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.931778 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.931851 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.931871 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.931901 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:55 crc kubenswrapper[4710]: I1128 16:59:55.931921 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:55Z","lastTransitionTime":"2025-11-28T16:59:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.034888 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.034925 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.034938 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.034955 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.034967 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.138140 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.138186 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.138198 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.138218 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.138230 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.240748 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.240806 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.240818 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.240833 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.240845 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.342948 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.342976 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.342984 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.342996 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.343005 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.445554 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.445614 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.445628 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.445648 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.445661 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.547865 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.547915 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.547928 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.547973 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.547989 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.609011 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/2.log" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.612838 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"} Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.613681 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.633544 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd88991-908e-4c47-a6c7-c2ded9e54311\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.650037 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97
aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.651113 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.651143 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.651155 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.651181 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.651194 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.670289 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.689620 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.711458 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:44Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d\\\\n2025-11-28T16:58:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d to /host/opt/cni/bin/\\\\n2025-11-28T16:58:59Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.735977 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.755888 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.755972 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:56 crc 
kubenswrapper[4710]: I1128 16:59:56.755998 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.756032 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.756069 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.761091 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b47c3bd1f91151c232ff2f0c7036071b3d89edbb
d02d9ee357580582aff6a78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:26Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1128 16:59:26.183491 6404 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183114 6404 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183825 6404 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184010 6404 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184063 6404 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:59:26.184435 6404 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:59:26.184584 6404 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:26.184648 6404 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 16:59:26.184653 6404 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:26.184925 6404 factory.go:656] Stopping watch factory\\\\nI1128 16:59:26.185008 6404 ovnkube.go:599] Stopped ovnkube\\\\nI1128 16:59:26.185093 6404 handler.go:208] Removed *v1.Node event handler 
7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.777606 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.793564 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.807567 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.820118 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.831051 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.846903 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.860112 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.860166 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.860176 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.860195 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.860215 4710 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.863593 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.877093 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.891493 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.903785 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.916491 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:56Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.963018 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.963058 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.963076 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.963097 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:56 crc kubenswrapper[4710]: I1128 16:59:56.963113 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:56Z","lastTransitionTime":"2025-11-28T16:59:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.066917 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.066994 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.067017 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.067140 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.067169 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.141119 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.141176 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.141129 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.141267 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:57 crc kubenswrapper[4710]: E1128 16:59:57.141454 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:57 crc kubenswrapper[4710]: E1128 16:59:57.141635 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:57 crc kubenswrapper[4710]: E1128 16:59:57.141754 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:57 crc kubenswrapper[4710]: E1128 16:59:57.141869 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.169740 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.169801 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.169813 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.169828 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.169842 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.273097 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.273161 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.273178 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.273206 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.273223 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.376187 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.376264 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.376299 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.376330 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.376368 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.479232 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.479311 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.479348 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.479382 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.479405 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.582552 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.582656 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.582682 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.582715 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.582745 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.619150 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/3.log" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.620000 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/2.log" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.623728 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e" exitCode=1 Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.623818 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.623918 4710 scope.go:117] "RemoveContainer" containerID="ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.624957 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e" Nov 28 16:59:57 crc kubenswrapper[4710]: E1128 16:59:57.625252 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.650213 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec07bbb76b3a5a0f7ac986b57148c1cde4c838f697a15aee6c77774f90e10e24\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:26Z\\\",\\\"message\\\":\\\"formers/externalversions/factory.go:141\\\\nI1128 16:59:26.183491 6404 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183114 6404 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.183825 6404 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184010 6404 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1128 16:59:26.184063 6404 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1128 16:59:26.184435 6404 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1128 16:59:26.184584 6404 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1128 16:59:26.184648 6404 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1128 16:59:26.184653 6404 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1128 16:59:26.184925 6404 factory.go:656] Stopping watch factory\\\\nI1128 16:59:26.185008 6404 ovnkube.go:599] Stopped ovnkube\\\\nI1128 16:59:26.185093 6404 handler.go:208] Removed *v1.Node event handler 7\\\\nI11\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:57Z\\\",\\\"message\\\":\\\",Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: 
default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.176],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1128 16:59:56.684551 6808 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to ve\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.665833 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 
16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.680654 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd88991-908e-4c47-a6c7-c2ded9e54311\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.684941 4710 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.684995 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.685006 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.685023 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.685038 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.694332 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T1
6:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.707664 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.717481 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.731408 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:44Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d\\\\n2025-11-28T16:58:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d to /host/opt/cni/bin/\\\\n2025-11-28T16:58:59Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.745454 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.758465 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.770587 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.785549 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.787261 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.787292 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.787303 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.787318 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.787328 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.796688 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.810180 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.821743 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.834412 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.850076 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.862892 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.873709 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:57Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.889537 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.889587 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.889605 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.889630 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.889649 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.993181 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.993257 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.993280 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.993314 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:57 crc kubenswrapper[4710]: I1128 16:59:57.993339 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:57Z","lastTransitionTime":"2025-11-28T16:59:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.096781 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.096819 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.096830 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.096844 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.096856 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.199320 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.199399 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.199422 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.199453 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.199478 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.302136 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.302174 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.302185 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.302198 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.302207 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.405900 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.405979 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.406001 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.406033 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.406052 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.509793 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.509855 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.509885 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.509903 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.509916 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.613316 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.613392 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.613418 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.613451 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.613475 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.629020 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/3.log" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.632717 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e" Nov 28 16:59:58 crc kubenswrapper[4710]: E1128 16:59:58.632927 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.648651 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.666008 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.678255 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.692628 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.705922 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.716097 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.716167 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.716180 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.716197 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.716230 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.716952 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.731987 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.742996 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.756619 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.767869 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.780339 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.807514 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.819049 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.819093 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.819103 4710 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.819118 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.819129 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.851327 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.864089 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:44Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d\\\\n2025-11-28T16:58:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d to /host/opt/cni/bin/\\\\n2025-11-28T16:58:59Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.878776 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.900612 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:57Z\\\",\\\"message\\\":\\\",Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.176],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1128 16:59:56.684551 6808 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to ve\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.913594 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.921656 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.921723 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.921741 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.921793 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.921818 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:58Z","lastTransitionTime":"2025-11-28T16:59:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:58 crc kubenswrapper[4710]: I1128 16:59:58.925916 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd88991-908e-4c47-a6c7-c2ded9e54311\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T16:59:58Z is after 2025-08-24T17:21:41Z" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.024981 4710 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.025045 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.025064 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.025089 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.025107 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.127734 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.127790 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.127800 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.127816 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.127826 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.141261 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.141333 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.141415 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 16:59:59 crc kubenswrapper[4710]: E1128 16:59:59.141470 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 16:59:59 crc kubenswrapper[4710]: E1128 16:59:59.141637 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.141497 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 16:59:59 crc kubenswrapper[4710]: E1128 16:59:59.141874 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 16:59:59 crc kubenswrapper[4710]: E1128 16:59:59.141898 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.231241 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.231289 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.231297 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.231313 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.231325 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.333648 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.333723 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.333747 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.333821 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.333845 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.437749 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.437883 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.437909 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.437939 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.437963 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.541045 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.541122 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.541135 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.541160 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.541177 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.643713 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.643980 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.644020 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.644053 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.644074 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.746945 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.747038 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.747071 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.747104 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.747128 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.849635 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.849705 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.849725 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.849783 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.849808 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.952473 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.952523 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.952551 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.952576 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 16:59:59 crc kubenswrapper[4710]: I1128 16:59:59.952590 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T16:59:59Z","lastTransitionTime":"2025-11-28T16:59:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.055474 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.055529 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.055544 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.055568 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.055584 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.157628 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.157666 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.157676 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.157695 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.157707 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.260349 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.260390 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.260398 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.260412 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.260422 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.362838 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.362874 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.362884 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.362899 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.362912 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.465068 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.465114 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.465125 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.465264 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.465277 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.568439 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.568492 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.568506 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.568527 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.568541 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.671799 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.671856 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.671873 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.671894 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.671908 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.774670 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.774718 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.774732 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.774751 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.774783 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.876780 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.876819 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.876827 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.876842 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.876852 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.979298 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.979389 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.979406 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.979428 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:00 crc kubenswrapper[4710]: I1128 17:00:00.979444 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:00Z","lastTransitionTime":"2025-11-28T17:00:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.082616 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.082673 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.082691 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.082744 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.082936 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.141391 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:01 crc kubenswrapper[4710]: E1128 17:00:01.141564 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.141584 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.141626 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.141629 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:01 crc kubenswrapper[4710]: E1128 17:00:01.141683 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:01 crc kubenswrapper[4710]: E1128 17:00:01.141975 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:01 crc kubenswrapper[4710]: E1128 17:00:01.142116 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.155480 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.168113 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.180035 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-26vk7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"31090e53-e553-42e8-a168-4e601ae0ccf0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8bb7a7b7f114c68e0dc3b245f928058642f7c56ad63c32d3afa8db85d661c5b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-mhc4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-26vk7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.185708 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.185769 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.185783 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.185799 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.185811 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.194399 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7cd88991-908e-4c47-a6c7-c2ded9e54311\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16228018c33e04102a840f7b6345ffb138e602eb67b06f75b84f2404bee9cf0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd605b2063cc7424e4f4d26db8e3a8fddd5134e897d6fd98a750ff72eaea5ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.208693 4710 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f5b7a20-38bb-4311-98d0-0d6ab7b3154e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c08ef038087b974ba53f77eb457fdaa35a193dbdfcdb7d0853fb2f832694ff2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af755f6d6c30599e0e9c2ea7ed191d8194c55222a9c794daed5feb4f81582786\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4aede44421b9c342d415c39f9a58bd3c127212c0b95eb650cd319efc933dd66f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed33d3d3866530e7e545cb6a5c01600b4fbf8fec8f2bf123f11b42e829f810f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.223206 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63c689784690ad5fbcf4a763565fee49518e4e791855b53a34696ab0b304ed2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dde623938be36ec1d850333dc757f80b636de1972906cb909c911898bad78f0e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.233654 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mhrhv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ac18a0af-e029-40a2-a035-963326dd8738\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d0fa98f64b19e53272bbeb0a3c85e9f58836e7a866c101feac90ae5e744509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wc9x8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mhrhv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.247960 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2j8nb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2ae360a-eba6-4e76-9942-83f5c21f3877\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:44Z\\\",\\\"message\\\":\\\"2025-11-28T16:58:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d\\\\n2025-11-28T16:58:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_04132c85-ecbc-4fe6-a2b0-4ca684735e4d to /host/opt/cni/bin/\\\\n2025-11-28T16:58:59Z [verbose] multus-daemon started\\\\n2025-11-28T16:58:59Z [verbose] Readiness Indicator file check\\\\n2025-11-28T16:59:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n5x7h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2j8nb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.261997 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f7bc0ce-8cd7-457d-8194-69354145dccc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de1b4ec5f23fa9274ed02b24a2d50d66e8523b2bb9bfad1bf19cc76b2ef2a838\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3648d54974563fa85e0c983746dd5d6b73488b4ec9fb5199dad72c752bcce52d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b3b14ea1a472c50409282b918e8d6f7151968940b0593c49d221313a87074ac6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://248d9602ee1a4b4b5b55c576a67249251e2c5d07990a3210f15aee01fe6a4261\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01945d0c51fcc7bdf4abffefe413498faf6c6eba73d65c786fce46e45af02b5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5efa7d3d0f0d658aefb66aa895202ae77a6e66a101008e864b1f7f490ff818d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0903431a2239454da6054caf474ff54461004f50b3f74d1d497bb72878e78ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:59:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q2fth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-t4jqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.278937 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bcf34ad7-9bed-49eb-ad10-20bc5825292a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-28T16:59:57Z\\\",\\\"message\\\":\\\",Selector:map[string]string{ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default,},ClusterIP:10.217.4.176,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.176],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1128 16:59:56.684551 6808 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for namespace Informer during admin network policy controller initialization, handler {0x1fcbf20 0x1fcbc00 0x1fcbba0} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to ve\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:59:55Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pzd6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-mzbq9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.291138 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.291214 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.291224 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.291244 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.291256 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.291259 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e03a307f-522c-480c-be7e-3ca520c12e49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04b9f4146e2d2561231cc874e8a223a52f7394c4f86cdd49874bad2f9c7e13a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://02e0386e677c128a211ad85e35a513718575f70c43178a362aa3f0f92619e6cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:59:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-t66cq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:07Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tktlf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.306834 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://07fc364acf4df6b2831d4e13b5bd73d611d99aa531a8f832e6484e11cb9411a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.319814 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.331327 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca87069-1d78-4e20-ba15-f37acec7135b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e6b7b004ea97d6e37be412bed5a6e0fa93c03cd645fe42407ca5d57dc1c2309\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bpvcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-9mscc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.346173 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-pwn66" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a6cf6922-30b9-4011-a998-255a33c143df\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zw5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:59:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-pwn66\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.359389 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"451cc0a2-73a5-4317-9bb3-6b896a5ece97\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:59:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-28T16:58:49Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1128 16:58:43.539252 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1128 16:58:43.541460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1194371240/tls.crt::/tmp/serving-cert-1194371240/tls.key\\\\\\\"\\\\nI1128 16:58:49.047209 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1128 16:58:49.051685 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1128 16:58:49.051858 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1128 16:58:49.051963 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1128 16:58:49.052020 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1128 16:58:49.062125 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1128 16:58:49.062196 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062253 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1128 16:58:49.062269 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1128 16:58:49.062280 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1128 16:58:49.062289 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1128 16:58:49.062303 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1128 16:58:49.062144 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1128 16:58:49.063869 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-28T16:58:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.371794 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8f7f8e2-1f72-48b3-8fbb-20dc6d77cbe1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3176b970e4d5c87393df6e66894974c74b8c2b6466199775befc31c07dffe71a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://35d2e0c1f6207cfdb587b96ad712fc77c6503484c93d9271453a8dab04e43a64\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf683646ff149aa68b9a19388d3f0a746c4f502edcae30a05b1fc7fe0c664db8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-28T16:58:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z"
Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.385954 4710 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-28T16:58:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27602da5bfca3597f87a96c7c33e45387725c835a96ca70c8b01f868010a64b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-28T16:58:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:01Z is after 2025-08-24T17:21:41Z"
Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.393882 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.393923 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.393934 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.393951 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:01 crc kubenswrapper[4710]: I1128 17:00:01.393964 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:01Z","lastTransitionTime":"2025-11-28T17:00:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
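The two status-patch failures above share one root cause: the kubelet's POST to the network-node-identity webhook fails TLS verification because the webhook's serving certificate expired on 2025-08-24T17:21:41Z. A minimal Go sketch of how to confirm this from the node, assuming the webhook is still listening on 127.0.0.1:9743 as the log shows (verification is skipped deliberately so the expired certificate can still be read):

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log entries above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // allow the handshake so the expired cert can be inspected
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	fmt.Println("subject:  ", leaf.Subject)
	fmt.Println("notBefore:", leaf.NotBefore)
	fmt.Println("notAfter: ", leaf.NotAfter)
	if time.Now().After(leaf.NotAfter) {
		// Mirrors the failure in the log: current time is after notAfter.
		fmt.Println("expired: webhook calls will keep failing x509 verification")
	}
}

On CRC this pattern usually means the bundled cluster's internal certificates lapsed while the VM was powered off; they are normally rotated automatically once the cluster has been running with a correct clock for a while.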
[... the preceding five-entry sequence ("Recording event message for node" NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady, then "Node became not ready") repeats roughly every 100 ms from 17:00:01.497004 through 17:00:02.939208; only the timestamps change ...]
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.041069 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.041110 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.041119 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.041132 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.041143 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.140942 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.141037 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.141048 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 17:00:03 crc kubenswrapper[4710]: E1128 17:00:03.141239 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df"
Nov 28 17:00:03 crc kubenswrapper[4710]: E1128 17:00:03.141285 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.141320 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 17:00:03 crc kubenswrapper[4710]: E1128 17:00:03.141658 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 17:00:03 crc kubenswrapper[4710]: E1128 17:00:03.141787 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[... the five-entry node-event sequence continues repeating from 17:00:03.143285 through 17:00:03.759673 ...]
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.862924 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.862992 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.863005 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.863027 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.863041 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
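Every "Node became not ready" record above and every skipped pod sync comes from the same readiness gate: the kubelet finds no CNI network configuration. A minimal Go sketch of the equivalent check, assuming it runs on the node and that the plugin directory is the /etc/kubernetes/cni/net.d/ path named in the message (the .conf/.conflist/.json extensions are the ones libcni conventionally loads):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the repeated kubelet message above.
	const confDir = "/etc/kubernetes/cni/net.d"

	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni conventionally loads
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		// The state behind NetworkReady=false in the entries above.
		fmt.Println("no CNI configuration file found; node stays NotReady until the network provider writes one")
	}
}

The file is expected to be written by the cluster's network plugin (OVN-Kubernetes on recent OpenShift) once its pods come up; until then, sandbox creation and the node Ready condition keep failing with this same message.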
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.948486 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.948536 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.948546 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.948563 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.948574 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:03 crc kubenswrapper[4710]: E1128 17:00:03.969676 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:03Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.973984 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.974034 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.974043 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.974057 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.974068 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:03 crc kubenswrapper[4710]: E1128 17:00:03.989816 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:03Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.994659 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.994699 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.994711 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.994726 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:03 crc kubenswrapper[4710]: I1128 17:00:03.994737 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:03Z","lastTransitionTime":"2025-11-28T17:00:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: E1128 17:00:04.009060 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:04Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.014316 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.014378 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
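The repeated E lines all come from one loop: kubelet_node_status.go builds a strategic-merge patch (note the $setElementOrder/conditions directive and the full image list) and retries it. The embedded patch is readable if you undo the two rounds of quoting it picks up on the way into the journal, once when the Go error string is built and once by the structured logger. A minimal decoding sketch under that assumption; "kubelet.log" is a hypothetical journal export, and the unescaping is an approximation rather than an official klog parser:

```python
import codecs
import json
import re

def decode_status_patch(line: str):
    """Recover the JSON status patch from a 'failed to patch status' line.

    The patch string is quoted twice on its way into the journal, so two
    rounds of backslash unescaping yield plain JSON again.
    """
    m = re.search(r'failed to patch status \\"(.*)\\" for node', line)
    if m is None:
        return None
    once = codecs.decode(m.group(1), "unicode_escape")   # strip logger quoting
    twice = codecs.decode(once, "unicode_escape")        # strip error-string quoting
    return json.loads(twice)

if __name__ == "__main__":
    # "kubelet.log" is a placeholder for a saved journal, one entry per line.
    with open("kubelet.log", errors="replace") as fh:
        for line in fh:
            patch = decode_status_patch(line)
            if patch:
                for cond in patch["status"].get("conditions", []):
                    print(cond["type"], cond["status"], cond["reason"])
                break
```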
event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.014396 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.014421 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.014436 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: E1128 17:00:04.030892 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:04Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.035357 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.035408 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.035420 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.035442 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.035455 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: E1128 17:00:04.051097 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-28T17:00:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a3da3522-f4c2-42e2-89ac-39d27db90382\\\",\\\"systemUUID\\\":\\\"56ee7c25-214c-4ce4-aeb2-2eaf54b784dc\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-28T17:00:04Z is after 2025-08-24T17:21:41Z" Nov 28 17:00:04 crc kubenswrapper[4710]: E1128 17:00:04.051259 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.053119 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
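Every retry dies at the same point: the node.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is roughly three months behind the node clock (2025-11-28), so the TLS handshake fails and the status patch never lands; once the retry budget is exhausted the kubelet reports "update node status exceeds retry count". A sketch that tallies these rejections from a journal export; the filename and regex are illustrative assumptions based on the exact message format above:

```python
import re
from collections import Counter
from datetime import datetime, timezone

# Matches the kubelet webhook failure as it is wrapped in this journal, e.g.
#   failed calling webhook \"node.network-node-identity.openshift.io\": ...
#   current time 2025-11-28T17:00:04Z is after 2025-08-24T17:21:41Z
WEBHOOK_ERR = re.compile(
    r'failed calling webhook \\*"(?P<webhook>[^"\\]+)\\*".*?'
    r'current time (?P<now>[0-9T:-]+)Z is after (?P<not_after>[0-9T:-]+)Z'
)

def expired_webhook_certs(journal_path):
    """Tally TLS-expiry rejections per (webhook, notAfter) with days of lag."""
    tally = Counter()
    with open(journal_path, errors="replace") as fh:
        for line in fh:
            m = WEBHOOK_ERR.search(line)
            if m:
                now = datetime.fromisoformat(m["now"]).replace(tzinfo=timezone.utc)
                expiry = datetime.fromisoformat(m["not_after"]).replace(tzinfo=timezone.utc)
                tally[(m["webhook"], m["not_after"], (now - expiry).days)] += 1
    return tally

if __name__ == "__main__":
    # "kubelet.log" is a placeholder for a journal export of the kubelet unit.
    for (webhook, not_after, days), n in expired_webhook_certs("kubelet.log").items():
        print(f"{webhook}: cert expired {not_after}Z ({days} days ago), {n} failed patches")
```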
event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.053166 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.053179 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.053198 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.053211 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.155660 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.155700 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.155713 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.155730 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.155745 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.258577 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.258629 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.258647 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.258673 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.258712 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.362978 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.363017 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.363028 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.363045 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.363057 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.465993 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.466082 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.466114 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.466145 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.466248 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.569057 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.569098 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.569108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.569126 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.569137 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.671522 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.671587 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.671604 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.671627 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.671731 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.774369 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.774423 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.774433 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.774450 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.774461 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.876791 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.876859 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.876871 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.876890 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.876903 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.978977 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.979046 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.979064 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.979108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:04 crc kubenswrapper[4710]: I1128 17:00:04.979126 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:04Z","lastTransitionTime":"2025-11-28T17:00:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.081912 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.081966 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.081978 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.081996 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.082008 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.141490 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.141488 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.141606 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:05 crc kubenswrapper[4710]: E1128 17:00:05.141650 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
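Downstream of the same condition, every pod that needs a fresh sandbox is skipped with "network is not ready", and the journal names each affected pod and podUID. A companion sketch, under the same hypothetical journal-export assumption, that lists the pods stuck behind the network-not-ready error:

```python
import re

# Matches the pod_workers "Error syncing pod" entries shown above.
POD_ERR = re.compile(
    r'"Error syncing pod, skipping" err="network is not ready.*?" '
    r'pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"'
)

def stuck_pods(journal_path):
    """Return {pod: podUID} for pods blocked on network readiness."""
    pods = {}
    with open(journal_path, errors="replace") as fh:
        for line in fh:
            for m in POD_ERR.finditer(line):
                pods[m["pod"]] = m["uid"]
    return pods

if __name__ == "__main__":
    # "kubelet.log" is a placeholder for `journalctl -u kubelet` output.
    for pod, uid in sorted(stuck_pods("kubelet.log").items()):
        print(f"{pod} ({uid})")
```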
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.141606 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:05 crc kubenswrapper[4710]: E1128 17:00:05.141798 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:05 crc kubenswrapper[4710]: E1128 17:00:05.141946 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:05 crc kubenswrapper[4710]: E1128 17:00:05.142125 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.185275 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.185331 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.185349 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.185377 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.185393 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
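The entries above are the pattern that dominates this stretch of the journal: roughly every 100 ms the kubelet records the four node events and rewrites the node's Ready condition to status False with reason KubeletNotReady, because its network-readiness check finds no CNI configuration file in /etc/kubernetes/cni/net.d/. The same condition can be read back from the API server. A minimal sketch, assuming the official `kubernetes` Python client and a kubeconfig that can reach this cluster; nothing below is taken from the log except the node name `crc`:

```python
# Sketch: read the Ready condition that setters.go:603 keeps writing above.
# Assumes: `pip install kubernetes` and a kubeconfig with access to the cluster.
from kubernetes import client, config

config.load_kube_config()                   # or config.load_incluster_config()
node = client.CoreV1Api().read_node("crc")  # node name taken from the log

for cond in node.status.conditions:
    if cond.type == "Ready":
        # Should mirror the condition JSON in the entries above:
        # status=False, reason=KubeletNotReady, message about the missing CNI config.
        print(cond.status, cond.reason, cond.message)
```

Until the network provider writes its config, the status stays False and only the heartbeat timestamps advance, which is exactly what the repeated setters.go:603 entries show.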
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.287980 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.288046 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.288062 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.288083 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.288097 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.390526 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.390577 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.390592 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.390611 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.390624 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.493420 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.493483 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.493502 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.493521 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.493532 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.596702 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.596839 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.596860 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.596897 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.596925 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.699002 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.699035 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.699045 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.699062 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.699073 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.802276 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.802658 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.802676 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.802701 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.802719 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.906201 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.906264 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.906282 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.906308 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:05 crc kubenswrapper[4710]: I1128 17:00:05.906327 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:05Z","lastTransitionTime":"2025-11-28T17:00:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.008525 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.008566 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.008577 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.008597 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.008609 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.112121 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.112185 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.112202 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.112229 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.112247 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.215007 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.215048 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.215057 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.215073 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.215083 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.317851 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.317894 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.317903 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.317924 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.317933 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.420555 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.420611 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.420629 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.420655 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.420670 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.524115 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.524163 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.524176 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.524193 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.524206 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.627285 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.627345 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.627361 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.627387 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.627404 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.730482 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.730534 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.730552 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.730575 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.730591 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.833007 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.833082 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.833107 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.833138 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.833157 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.935626 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.935672 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.935681 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.935699 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:06 crc kubenswrapper[4710]: I1128 17:00:06.935710 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:06Z","lastTransitionTime":"2025-11-28T17:00:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.038918 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.038969 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.038981 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.039002 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.039013 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.140606 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.140680 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.140715 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.140866 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 17:00:07 crc kubenswrapper[4710]: E1128 17:00:07.140850 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 17:00:07 crc kubenswrapper[4710]: E1128 17:00:07.141135 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df"
Nov 28 17:00:07 crc kubenswrapper[4710]: E1128 17:00:07.141183 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 17:00:07 crc kubenswrapper[4710]: E1128 17:00:07.141292 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.142595 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.142689 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.142816 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.142889 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.142957 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.246228 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.246311 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.246326 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.246345 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.246358 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
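The pod_workers.go:1301 errors recur for the same four pods roughly every two seconds (17:00:05.14 above, 17:00:07.14 here, and again below at 17:00:09.14): the kubelet skips syncing them because no sandbox can be created while the network is not ready. A sketch of listing exactly these stuck pods, again assuming the `kubernetes` Python client and kubeconfig access; the field selectors are standard Kubernetes ones, and only the node name comes from the log:

```python
# Sketch: list pods scheduled to node "crc" that are still Pending, which is
# where the four pods named in the errors above sit until a sandbox exists.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_pod_for_all_namespaces(
    field_selector="spec.nodeName=crc,status.phase=Pending"
)
for p in pods.items:
    print(p.metadata.namespace, p.metadata.name, p.metadata.uid)
```

Run against this cluster at this point in the boot, the output should include the four podUIDs quoted in the errors (5fe485a1-..., 3b6479f0-..., a6cf6922-..., 9d751cbb-...).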
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.349976 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.350009 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.350021 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.350037 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.350049 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.452551 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.452830 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.453007 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.453104 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.453169 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.556397 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.556471 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.556496 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.556528 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.556590 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.660161 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.660201 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.660212 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.660267 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.660280 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.762943 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.763020 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.763054 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.763086 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.763107 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.865599 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.865677 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.865700 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.865728 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.865748 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.968376 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.968861 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.969168 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.969416 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:07 crc kubenswrapper[4710]: I1128 17:00:07.969630 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:07Z","lastTransitionTime":"2025-11-28T17:00:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.073108 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.073163 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.073182 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.073209 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.073234 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.176419 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.176492 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.176520 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.176548 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.176569 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.279499 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.279574 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.279599 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.279632 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.279649 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.382660 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.383605 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.383787 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.383950 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.384105 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.487526 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.487586 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.487613 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.487646 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.487673 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.590453 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.590532 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.590552 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.590583 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.590601 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.693709 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.694072 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.694173 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.694276 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.694363 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.797967 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.798395 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.798547 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.798699 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.798984 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.902278 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.902354 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.902378 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.902406 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:08 crc kubenswrapper[4710]: I1128 17:00:08.902423 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:08Z","lastTransitionTime":"2025-11-28T17:00:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.006442 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.006507 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.006530 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.006559 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.006583 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.110083 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.110170 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.110191 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.110218 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.110236 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.141534 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 28 17:00:09 crc kubenswrapper[4710]: E1128 17:00:09.142013 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.142041 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.142149 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 17:00:09 crc kubenswrapper[4710]: E1128 17:00:09.142203 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df"
Nov 28 17:00:09 crc kubenswrapper[4710]: E1128 17:00:09.142296 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.142353 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 17:00:09 crc kubenswrapper[4710]: E1128 17:00:09.142827 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.213682 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.213732 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.213749 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.213810 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.213832 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.317179 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.317238 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.317250 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.317280 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.317294 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
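What the readiness check is polling for is simply a configuration file appearing in /etc/kubernetes/cni/net.d/. On this cluster the network provider writes that file itself once it is up (the openshift-multus namespace in the pod names above shows Multus is the CNI meta-plugin in play), so the flood normally resolves without intervention. Purely to illustrate the file format being waited on, and explicitly not as a fix for this cluster, a minimal loopback configuration as defined by the CNI spec, emitted from Python; the version string and plugin type are assumptions based on the reference containernetworking plugins:

```python
# Sketch: the shape of file kubelet/CRI-O expects to find in
# /etc/kubernetes/cni/net.d/. On OpenShift the network operator writes the
# real one; hand-creating a file here is a diagnostic aid at most.
import json

minimal_conf = {
    "cniVersion": "0.3.1",  # a version the reference loopback plugin accepts (assumption)
    "name": "lo",
    "type": "loopback",     # simplest plugin from containernetworking/plugins
}

print(json.dumps(minimal_conf, indent=2))  # would be saved as, e.g., 99-loopback.conf
```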
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.421633 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.421690 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.421706 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.421731 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.421753 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.524573 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.524622 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.524633 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.524648 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.524659 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.628230 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.628297 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.628320 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.628350 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.628376 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.731552 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.731606 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.731622 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.731645 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.731662 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.834517 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.834582 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.834599 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.834675 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.834717 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.937325 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.937363 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.937372 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.937386 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:09 crc kubenswrapper[4710]: I1128 17:00:09.937395 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:09Z","lastTransitionTime":"2025-11-28T17:00:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.044127 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.044197 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.044212 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.044233 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.044253 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.147614 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.147710 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.147736 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.147812 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.147833 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.251864 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.251967 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.252026 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.252049 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.252100 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.354590 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.354636 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.354648 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.354665 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.354678 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.457950 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.457998 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.458012 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.458036 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.458054 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.560836 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.560893 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.560906 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.560927 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.560938 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.663487 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.663558 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.663574 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.663603 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.663618 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.767716 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.767859 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.767884 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.767916 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.767937 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.871270 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.871315 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.871325 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.871346 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:10 crc kubenswrapper[4710]: I1128 17:00:10.871357 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:10Z","lastTransitionTime":"2025-11-28T17:00:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.018974 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.019051 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.019070 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.019153 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.019179 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.122839 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.122894 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.122911 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.122935 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.122952 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.140746 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.140917 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.141008 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.141015 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:11 crc kubenswrapper[4710]: E1128 17:00:11.141003 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:11 crc kubenswrapper[4710]: E1128 17:00:11.141182 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:11 crc kubenswrapper[4710]: E1128 17:00:11.141291 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:11 crc kubenswrapper[4710]: E1128 17:00:11.141940 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.142402 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e" Nov 28 17:00:11 crc kubenswrapper[4710]: E1128 17:00:11.142673 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.200008 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=82.199979647 podStartE2EDuration="1m22.199979647s" podCreationTimestamp="2025-11-28 16:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.175641674 +0000 UTC m=+100.433941769" watchObservedRunningTime="2025-11-28 17:00:11.199979647 +0000 UTC m=+100.458279722" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.225466 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.225543 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.225557 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.225578 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.225615 4710 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.230229 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.230214049 podStartE2EDuration="1m16.230214049s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.202219268 +0000 UTC m=+100.460519373" watchObservedRunningTime="2025-11-28 17:00:11.230214049 +0000 UTC m=+100.488514104" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.287328 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-26vk7" podStartSLOduration=77.287299434 podStartE2EDuration="1m17.287299434s" podCreationTimestamp="2025-11-28 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.287113948 +0000 UTC m=+100.545414013" watchObservedRunningTime="2025-11-28 17:00:11.287299434 +0000 UTC m=+100.545599489" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.311811 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-t4jqb" podStartSLOduration=76.31172475 podStartE2EDuration="1m16.31172475s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.31016432 +0000 UTC m=+100.568464375" watchObservedRunningTime="2025-11-28 17:00:11.31172475 +0000 UTC m=+100.570024835" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.328408 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.328470 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.328488 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.328513 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.328533 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.366882 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tktlf" podStartSLOduration=76.36685331300001 podStartE2EDuration="1m16.366853313s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.366811822 +0000 UTC m=+100.625111907" watchObservedRunningTime="2025-11-28 17:00:11.366853313 +0000 UTC m=+100.625153368" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.382015 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=36.381987929 podStartE2EDuration="36.381987929s" podCreationTimestamp="2025-11-28 16:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.380652127 +0000 UTC m=+100.638952172" watchObservedRunningTime="2025-11-28 17:00:11.381987929 +0000 UTC m=+100.640288014" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.396491 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=52.396468855 podStartE2EDuration="52.396468855s" podCreationTimestamp="2025-11-28 16:59:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.395667059 +0000 UTC m=+100.653967104" watchObservedRunningTime="2025-11-28 17:00:11.396468855 +0000 UTC m=+100.654768920" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.432076 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.432114 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.432125 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.432142 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.432153 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.450645 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mhrhv" podStartSLOduration=77.450617586 podStartE2EDuration="1m17.450617586s" podCreationTimestamp="2025-11-28 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.42958646 +0000 UTC m=+100.687886585" watchObservedRunningTime="2025-11-28 17:00:11.450617586 +0000 UTC m=+100.708917661" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.451791 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2j8nb" podStartSLOduration=76.451748352 podStartE2EDuration="1m16.451748352s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.449298514 +0000 UTC m=+100.707598569" watchObservedRunningTime="2025-11-28 17:00:11.451748352 +0000 UTC m=+100.710048437" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.515232 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podStartSLOduration=76.515203273 podStartE2EDuration="1m16.515203273s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:11.498859988 +0000 UTC m=+100.757160073" watchObservedRunningTime="2025-11-28 17:00:11.515203273 +0000 UTC m=+100.773503348" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.534495 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.534548 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.534564 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.534584 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.534596 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.637244 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.637300 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.637313 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.637333 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.637347 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.740953 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.741006 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.741025 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.741047 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.741066 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.845050 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.845131 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.845153 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.845189 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.845211 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.948327 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.948411 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.948433 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.948465 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:11 crc kubenswrapper[4710]: I1128 17:00:11.948487 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:11Z","lastTransitionTime":"2025-11-28T17:00:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.051662 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.051734 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.051818 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.051849 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.051871 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.154960 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.155007 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.155029 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.155052 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.155066 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.258853 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.258928 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.258946 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.258972 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.258990 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.362724 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.362793 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.362809 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.362830 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.362850 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.465609 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.465680 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.465704 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.465739 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.465800 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.570614 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.570717 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.570746 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.570818 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.570859 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.674852 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.674933 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.674952 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.674978 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.674997 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.777860 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.777906 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.777915 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.777931 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.777940 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.880889 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.880948 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.880967 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.880990 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.881006 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.983524 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.983616 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.983638 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.983667 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:12 crc kubenswrapper[4710]: I1128 17:00:12.983686 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:12Z","lastTransitionTime":"2025-11-28T17:00:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.087159 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.087245 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.087269 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.087301 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.087326 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:13Z","lastTransitionTime":"2025-11-28T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.140835 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.140837 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.140921 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.141372 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:13 crc kubenswrapper[4710]: E1128 17:00:13.141563 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:13 crc kubenswrapper[4710]: E1128 17:00:13.141660 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:13 crc kubenswrapper[4710]: E1128 17:00:13.141869 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:13 crc kubenswrapper[4710]: E1128 17:00:13.142050 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.190155 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.190192 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.190201 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.190213 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.190221 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:13Z","lastTransitionTime":"2025-11-28T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.293107 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.293139 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.293147 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.293160 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.293170 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:13Z","lastTransitionTime":"2025-11-28T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.397384 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.397463 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.397488 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.397519 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.397542 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:13Z","lastTransitionTime":"2025-11-28T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.444538 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:13 crc kubenswrapper[4710]: E1128 17:00:13.444821 4710 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 17:00:13 crc kubenswrapper[4710]: E1128 17:00:13.444951 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs podName:a6cf6922-30b9-4011-a998-255a33c143df nodeName:}" failed. No retries permitted until 2025-11-28 17:01:17.444914643 +0000 UTC m=+166.703214748 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs") pod "network-metrics-daemon-pwn66" (UID: "a6cf6922-30b9-4011-a998-255a33c143df") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.500476 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.500554 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.500590 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.500635 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.500661 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:13Z","lastTransitionTime":"2025-11-28T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.604515 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.604577 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.604595 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.604621 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.604639 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:13Z","lastTransitionTime":"2025-11-28T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.707526 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.707569 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.707580 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.707601 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.707614 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:13Z","lastTransitionTime":"2025-11-28T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.810739 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.810837 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.810853 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.810875 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.810890 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:13Z","lastTransitionTime":"2025-11-28T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.913280 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.913330 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.913340 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.913357 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:13 crc kubenswrapper[4710]: I1128 17:00:13.913368 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:13Z","lastTransitionTime":"2025-11-28T17:00:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.016512 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.016552 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.016562 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.016578 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.016588 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:14Z","lastTransitionTime":"2025-11-28T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.119226 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.119274 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.119286 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.119303 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.119321 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:14Z","lastTransitionTime":"2025-11-28T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.223253 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.223305 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.223321 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.223348 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.223365 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:14Z","lastTransitionTime":"2025-11-28T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.249285 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.249371 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.249385 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.249420 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.249436 4710 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-28T17:00:14Z","lastTransitionTime":"2025-11-28T17:00:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.294749 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw"] Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.295974 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.298879 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.299019 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.299721 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.299778 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.355398 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/45686c7a-ea7e-44bf-a2e5-4b2557b39305-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.355476 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45686c7a-ea7e-44bf-a2e5-4b2557b39305-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.355542 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45686c7a-ea7e-44bf-a2e5-4b2557b39305-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.355648 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/45686c7a-ea7e-44bf-a2e5-4b2557b39305-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.355682 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45686c7a-ea7e-44bf-a2e5-4b2557b39305-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.457137 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/45686c7a-ea7e-44bf-a2e5-4b2557b39305-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.457247 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45686c7a-ea7e-44bf-a2e5-4b2557b39305-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.457273 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/45686c7a-ea7e-44bf-a2e5-4b2557b39305-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.457299 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/45686c7a-ea7e-44bf-a2e5-4b2557b39305-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.457363 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/45686c7a-ea7e-44bf-a2e5-4b2557b39305-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.457376 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45686c7a-ea7e-44bf-a2e5-4b2557b39305-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: 
\"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.457457 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45686c7a-ea7e-44bf-a2e5-4b2557b39305-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.458392 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/45686c7a-ea7e-44bf-a2e5-4b2557b39305-service-ca\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.464079 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/45686c7a-ea7e-44bf-a2e5-4b2557b39305-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.476337 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/45686c7a-ea7e-44bf-a2e5-4b2557b39305-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-4zknw\" (UID: \"45686c7a-ea7e-44bf-a2e5-4b2557b39305\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.610994 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" Nov 28 17:00:14 crc kubenswrapper[4710]: I1128 17:00:14.687083 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" event={"ID":"45686c7a-ea7e-44bf-a2e5-4b2557b39305","Type":"ContainerStarted","Data":"22b2e76bd70e38d788af4fec961e06426e9244deb76c98ca50ff5ec5c95be1d9"} Nov 28 17:00:15 crc kubenswrapper[4710]: I1128 17:00:15.141511 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:15 crc kubenswrapper[4710]: I1128 17:00:15.141570 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:15 crc kubenswrapper[4710]: I1128 17:00:15.141642 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:15 crc kubenswrapper[4710]: E1128 17:00:15.141653 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:15 crc kubenswrapper[4710]: I1128 17:00:15.141749 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:15 crc kubenswrapper[4710]: E1128 17:00:15.141852 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:15 crc kubenswrapper[4710]: E1128 17:00:15.141970 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:15 crc kubenswrapper[4710]: E1128 17:00:15.142156 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:15 crc kubenswrapper[4710]: I1128 17:00:15.158023 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 28 17:00:15 crc kubenswrapper[4710]: I1128 17:00:15.691963 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" event={"ID":"45686c7a-ea7e-44bf-a2e5-4b2557b39305","Type":"ContainerStarted","Data":"b1bfc66b61001f59e9eecc7288a0b56cc9f94c17ccb3d07764cef8a3342390a5"} Nov 28 17:00:15 crc kubenswrapper[4710]: I1128 17:00:15.720442 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=0.720425794 podStartE2EDuration="720.425794ms" podCreationTimestamp="2025-11-28 17:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:15.719838465 +0000 UTC m=+104.978138510" watchObservedRunningTime="2025-11-28 17:00:15.720425794 +0000 UTC m=+104.978725839" Nov 28 17:00:15 crc kubenswrapper[4710]: I1128 17:00:15.736567 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-4zknw" podStartSLOduration=80.736546612 podStartE2EDuration="1m20.736546612s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:15.73618436 +0000 UTC m=+104.994484405" watchObservedRunningTime="2025-11-28 17:00:15.736546612 +0000 UTC m=+104.994846657" Nov 28 17:00:17 crc kubenswrapper[4710]: I1128 17:00:17.140476 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:17 crc kubenswrapper[4710]: I1128 17:00:17.140566 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:17 crc kubenswrapper[4710]: I1128 17:00:17.140513 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:17 crc kubenswrapper[4710]: E1128 17:00:17.140697 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:17 crc kubenswrapper[4710]: E1128 17:00:17.140895 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:17 crc kubenswrapper[4710]: E1128 17:00:17.141001 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:17 crc kubenswrapper[4710]: I1128 17:00:17.141240 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:17 crc kubenswrapper[4710]: E1128 17:00:17.141337 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:19 crc kubenswrapper[4710]: I1128 17:00:19.140749 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:19 crc kubenswrapper[4710]: I1128 17:00:19.140816 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:19 crc kubenswrapper[4710]: E1128 17:00:19.142089 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:19 crc kubenswrapper[4710]: I1128 17:00:19.140922 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:19 crc kubenswrapper[4710]: I1128 17:00:19.140881 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:19 crc kubenswrapper[4710]: E1128 17:00:19.142200 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:19 crc kubenswrapper[4710]: E1128 17:00:19.141968 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:19 crc kubenswrapper[4710]: E1128 17:00:19.142285 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:21 crc kubenswrapper[4710]: I1128 17:00:21.140438 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:21 crc kubenswrapper[4710]: I1128 17:00:21.140549 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:21 crc kubenswrapper[4710]: I1128 17:00:21.140615 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:21 crc kubenswrapper[4710]: E1128 17:00:21.140645 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:21 crc kubenswrapper[4710]: I1128 17:00:21.140699 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:21 crc kubenswrapper[4710]: E1128 17:00:21.142545 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:21 crc kubenswrapper[4710]: E1128 17:00:21.142696 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:21 crc kubenswrapper[4710]: E1128 17:00:21.142808 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:23 crc kubenswrapper[4710]: I1128 17:00:23.141450 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:23 crc kubenswrapper[4710]: I1128 17:00:23.141515 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:23 crc kubenswrapper[4710]: I1128 17:00:23.141450 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:23 crc kubenswrapper[4710]: E1128 17:00:23.141613 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:23 crc kubenswrapper[4710]: I1128 17:00:23.141678 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:23 crc kubenswrapper[4710]: E1128 17:00:23.141792 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:23 crc kubenswrapper[4710]: E1128 17:00:23.141966 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:23 crc kubenswrapper[4710]: E1128 17:00:23.142161 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:23 crc kubenswrapper[4710]: I1128 17:00:23.143111 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e" Nov 28 17:00:23 crc kubenswrapper[4710]: E1128 17:00:23.143317 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" Nov 28 17:00:25 crc kubenswrapper[4710]: I1128 17:00:25.140922 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:25 crc kubenswrapper[4710]: I1128 17:00:25.140965 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:25 crc kubenswrapper[4710]: I1128 17:00:25.140938 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:25 crc kubenswrapper[4710]: E1128 17:00:25.141068 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:25 crc kubenswrapper[4710]: E1128 17:00:25.141130 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:25 crc kubenswrapper[4710]: E1128 17:00:25.141188 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:25 crc kubenswrapper[4710]: I1128 17:00:25.141330 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:25 crc kubenswrapper[4710]: E1128 17:00:25.141392 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:27 crc kubenswrapper[4710]: I1128 17:00:27.141575 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:27 crc kubenswrapper[4710]: I1128 17:00:27.141579 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:27 crc kubenswrapper[4710]: I1128 17:00:27.142225 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:27 crc kubenswrapper[4710]: I1128 17:00:27.142306 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:27 crc kubenswrapper[4710]: E1128 17:00:27.142319 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:27 crc kubenswrapper[4710]: E1128 17:00:27.142434 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:27 crc kubenswrapper[4710]: E1128 17:00:27.142591 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:27 crc kubenswrapper[4710]: E1128 17:00:27.142672 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:29 crc kubenswrapper[4710]: I1128 17:00:29.141291 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:29 crc kubenswrapper[4710]: I1128 17:00:29.141481 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:29 crc kubenswrapper[4710]: I1128 17:00:29.141526 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:29 crc kubenswrapper[4710]: I1128 17:00:29.141576 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:29 crc kubenswrapper[4710]: E1128 17:00:29.141664 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:29 crc kubenswrapper[4710]: E1128 17:00:29.142102 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:29 crc kubenswrapper[4710]: E1128 17:00:29.142273 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:29 crc kubenswrapper[4710]: E1128 17:00:29.142375 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.140994 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.141170 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:31 crc kubenswrapper[4710]: E1128 17:00:31.142295 4710 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 28 17:00:31 crc kubenswrapper[4710]: E1128 17:00:31.142802 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.142855 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.142833 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:31 crc kubenswrapper[4710]: E1128 17:00:31.143045 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:31 crc kubenswrapper[4710]: E1128 17:00:31.143172 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:31 crc kubenswrapper[4710]: E1128 17:00:31.143280 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:31 crc kubenswrapper[4710]: E1128 17:00:31.212907 4710 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
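The "Error syncing pod, skipping ... network is not ready" record above recurs on a two-second cycle for the same four pods (network-metrics-daemon-pwn66, network-check-target-xd92c, network-check-source-55646444c4-trplf, networking-console-plugin-85b44fc459-gdk6g) for as long as no CNI configuration exists in /etc/kubernetes/cni/net.d/. A minimal Python sketch for tallying those records from a one-record-per-line kubelet journal export; the regexes and the tally_sync_errors helper are illustrative, not part of any kubelet tooling:

    import re
    import sys
    from collections import Counter

    # klog prefix as it appears in the records above, e.g.:
    #   E1128 17:00:31.143280 4710 pod_workers.go:1301] "Error syncing pod, skipping" ... pod="ns/name" ...
    KLOG = re.compile(r'([IWE])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+([\w.]+:\d+)\] "([^"]*)"(.*)')
    POD = re.compile(r' pod="([^"]+)"')

    def tally_sync_errors(lines):
        """Count 'Error syncing pod, skipping' records per pod."""
        counts = Counter()
        for line in lines:
            m = KLOG.search(line)
            if m and m.group(5) == "Error syncing pod, skipping":
                p = POD.search(m.group(6))
                if p:
                    counts[p.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for pod, n in tally_sync_errors(sys.stdin).most_common():
            print(f"{n:4d}  {pod}")

For example: journalctl -u kubelet --no-pager | python3 tally_sync_errors.py (script name hypothetical).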
Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.744390 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/1.log" Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.744950 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/0.log" Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.744998 4710 generic.go:334] "Generic (PLEG): container finished" podID="b2ae360a-eba6-4e76-9942-83f5c21f3877" containerID="f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c" exitCode=1 Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.745033 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2j8nb" event={"ID":"b2ae360a-eba6-4e76-9942-83f5c21f3877","Type":"ContainerDied","Data":"f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c"} Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.745073 4710 scope.go:117] "RemoveContainer" containerID="464388c979ad0526273bb62aa1ae53a671fc0d61272fba0ef4f8f5a5edf3fcd7" Nov 28 17:00:31 crc kubenswrapper[4710]: I1128 17:00:31.745442 4710 scope.go:117] "RemoveContainer" containerID="f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c" Nov 28 17:00:31 crc kubenswrapper[4710]: E1128 17:00:31.745591 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2j8nb_openshift-multus(b2ae360a-eba6-4e76-9942-83f5c21f3877)\"" pod="openshift-multus/multus-2j8nb" podUID="b2ae360a-eba6-4e76-9942-83f5c21f3877" Nov 28 17:00:32 crc kubenswrapper[4710]: I1128 17:00:32.750989 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/1.log" Nov 28 17:00:33 crc kubenswrapper[4710]: I1128 17:00:33.140488 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:33 crc kubenswrapper[4710]: I1128 17:00:33.140573 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:33 crc kubenswrapper[4710]: I1128 17:00:33.140571 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:33 crc kubenswrapper[4710]: I1128 17:00:33.140648 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:33 crc kubenswrapper[4710]: E1128 17:00:33.140689 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:33 crc kubenswrapper[4710]: E1128 17:00:33.140881 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:33 crc kubenswrapper[4710]: E1128 17:00:33.140975 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:33 crc kubenswrapper[4710]: E1128 17:00:33.141044 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:35 crc kubenswrapper[4710]: I1128 17:00:35.141411 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:35 crc kubenswrapper[4710]: I1128 17:00:35.141800 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:35 crc kubenswrapper[4710]: I1128 17:00:35.141844 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:35 crc kubenswrapper[4710]: E1128 17:00:35.141997 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:35 crc kubenswrapper[4710]: E1128 17:00:35.142195 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:35 crc kubenswrapper[4710]: I1128 17:00:35.141961 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:35 crc kubenswrapper[4710]: I1128 17:00:35.142306 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e" Nov 28 17:00:35 crc kubenswrapper[4710]: E1128 17:00:35.142384 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:35 crc kubenswrapper[4710]: E1128 17:00:35.142476 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-mzbq9_openshift-ovn-kubernetes(bcf34ad7-9bed-49eb-ad10-20bc5825292a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" Nov 28 17:00:35 crc kubenswrapper[4710]: E1128 17:00:35.142476 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:36 crc kubenswrapper[4710]: E1128 17:00:36.213921 4710 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:00:37 crc kubenswrapper[4710]: I1128 17:00:37.141319 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:37 crc kubenswrapper[4710]: I1128 17:00:37.141401 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:37 crc kubenswrapper[4710]: E1128 17:00:37.141457 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:37 crc kubenswrapper[4710]: E1128 17:00:37.141576 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:37 crc kubenswrapper[4710]: I1128 17:00:37.141650 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:37 crc kubenswrapper[4710]: E1128 17:00:37.141745 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:37 crc kubenswrapper[4710]: I1128 17:00:37.141811 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:37 crc kubenswrapper[4710]: E1128 17:00:37.141954 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:39 crc kubenswrapper[4710]: I1128 17:00:39.140624 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:39 crc kubenswrapper[4710]: I1128 17:00:39.140691 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:39 crc kubenswrapper[4710]: I1128 17:00:39.140658 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:39 crc kubenswrapper[4710]: E1128 17:00:39.140911 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:39 crc kubenswrapper[4710]: I1128 17:00:39.140980 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:39 crc kubenswrapper[4710]: E1128 17:00:39.141150 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:39 crc kubenswrapper[4710]: E1128 17:00:39.141293 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:39 crc kubenswrapper[4710]: E1128 17:00:39.141424 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:41 crc kubenswrapper[4710]: I1128 17:00:41.141235 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:41 crc kubenswrapper[4710]: I1128 17:00:41.141249 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:41 crc kubenswrapper[4710]: I1128 17:00:41.141331 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:41 crc kubenswrapper[4710]: I1128 17:00:41.141338 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:41 crc kubenswrapper[4710]: E1128 17:00:41.143247 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:41 crc kubenswrapper[4710]: E1128 17:00:41.143399 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:41 crc kubenswrapper[4710]: E1128 17:00:41.143477 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:41 crc kubenswrapper[4710]: E1128 17:00:41.143579 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:41 crc kubenswrapper[4710]: E1128 17:00:41.216005 4710 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:00:43 crc kubenswrapper[4710]: I1128 17:00:43.141341 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:43 crc kubenswrapper[4710]: I1128 17:00:43.141435 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:43 crc kubenswrapper[4710]: E1128 17:00:43.141497 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:43 crc kubenswrapper[4710]: I1128 17:00:43.141358 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:43 crc kubenswrapper[4710]: E1128 17:00:43.141583 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:43 crc kubenswrapper[4710]: E1128 17:00:43.141682 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:43 crc kubenswrapper[4710]: I1128 17:00:43.141749 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:43 crc kubenswrapper[4710]: E1128 17:00:43.141912 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:45 crc kubenswrapper[4710]: I1128 17:00:45.141235 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:45 crc kubenswrapper[4710]: I1128 17:00:45.141502 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:45 crc kubenswrapper[4710]: E1128 17:00:45.141614 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:45 crc kubenswrapper[4710]: I1128 17:00:45.141662 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:45 crc kubenswrapper[4710]: I1128 17:00:45.141637 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:45 crc kubenswrapper[4710]: E1128 17:00:45.141927 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:45 crc kubenswrapper[4710]: E1128 17:00:45.142029 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:45 crc kubenswrapper[4710]: E1128 17:00:45.142145 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:46 crc kubenswrapper[4710]: E1128 17:00:46.217266 4710 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:00:47 crc kubenswrapper[4710]: I1128 17:00:47.141228 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:47 crc kubenswrapper[4710]: I1128 17:00:47.141339 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:47 crc kubenswrapper[4710]: E1128 17:00:47.141445 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:47 crc kubenswrapper[4710]: I1128 17:00:47.141636 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:47 crc kubenswrapper[4710]: E1128 17:00:47.141639 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:47 crc kubenswrapper[4710]: I1128 17:00:47.141677 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:47 crc kubenswrapper[4710]: E1128 17:00:47.142027 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:47 crc kubenswrapper[4710]: E1128 17:00:47.142140 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:47 crc kubenswrapper[4710]: I1128 17:00:47.142236 4710 scope.go:117] "RemoveContainer" containerID="f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c" Nov 28 17:00:47 crc kubenswrapper[4710]: I1128 17:00:47.813652 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/1.log" Nov 28 17:00:47 crc kubenswrapper[4710]: I1128 17:00:47.813731 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2j8nb" event={"ID":"b2ae360a-eba6-4e76-9942-83f5c21f3877","Type":"ContainerStarted","Data":"a629b14c6ba490c00394b27559807625366fd25664c19466b47c4835e45f6415"} Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.141082 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.141161 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:49 crc kubenswrapper[4710]: E1128 17:00:49.141208 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.141266 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.141640 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:49 crc kubenswrapper[4710]: E1128 17:00:49.141717 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:49 crc kubenswrapper[4710]: E1128 17:00:49.141792 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:49 crc kubenswrapper[4710]: E1128 17:00:49.141877 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.142329 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e" Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.823035 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/3.log" Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.826678 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerStarted","Data":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.827217 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.937881 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podStartSLOduration=114.937850917 podStartE2EDuration="1m54.937850917s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:00:49.854127251 +0000 UTC m=+139.112427296" watchObservedRunningTime="2025-11-28 17:00:49.937850917 +0000 UTC m=+139.196150962" Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.938832 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pwn66"] Nov 28 17:00:49 crc kubenswrapper[4710]: I1128 17:00:49.938979 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:49 crc kubenswrapper[4710]: E1128 17:00:49.939095 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:51 crc kubenswrapper[4710]: I1128 17:00:51.140668 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:51 crc kubenswrapper[4710]: I1128 17:00:51.140716 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:51 crc kubenswrapper[4710]: E1128 17:00:51.140915 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:51 crc kubenswrapper[4710]: E1128 17:00:51.142859 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:51 crc kubenswrapper[4710]: I1128 17:00:51.142910 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:51 crc kubenswrapper[4710]: E1128 17:00:51.143029 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:51 crc kubenswrapper[4710]: E1128 17:00:51.219074 4710 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 28 17:00:52 crc kubenswrapper[4710]: I1128 17:00:52.140515 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:52 crc kubenswrapper[4710]: E1128 17:00:52.140948 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:53 crc kubenswrapper[4710]: I1128 17:00:53.140947 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:53 crc kubenswrapper[4710]: I1128 17:00:53.141017 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:53 crc kubenswrapper[4710]: I1128 17:00:53.141116 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:53 crc kubenswrapper[4710]: E1128 17:00:53.141106 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:53 crc kubenswrapper[4710]: E1128 17:00:53.141230 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:53 crc kubenswrapper[4710]: E1128 17:00:53.141295 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:54 crc kubenswrapper[4710]: I1128 17:00:54.140837 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:54 crc kubenswrapper[4710]: E1128 17:00:54.141039 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:55 crc kubenswrapper[4710]: I1128 17:00:55.141685 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:55 crc kubenswrapper[4710]: I1128 17:00:55.141805 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:55 crc kubenswrapper[4710]: E1128 17:00:55.141889 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:00:55 crc kubenswrapper[4710]: E1128 17:00:55.141969 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:00:55 crc kubenswrapper[4710]: I1128 17:00:55.142078 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:55 crc kubenswrapper[4710]: E1128 17:00:55.142202 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:00:56 crc kubenswrapper[4710]: I1128 17:00:56.140956 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:56 crc kubenswrapper[4710]: E1128 17:00:56.141147 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-pwn66" podUID="a6cf6922-30b9-4011-a998-255a33c143df" Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.049231 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049378 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:02:59.049358991 +0000 UTC m=+268.307659036 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.049518 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.049565 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049723 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049747 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049822 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049834 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049847 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049856 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049867 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049911 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-28 17:02:59.049900788 +0000 UTC m=+268.308200833 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049936 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 17:02:59.049919329 +0000 UTC m=+268.308219404 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.049968 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 17:02:59.04995713 +0000 UTC m=+268.308257205 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.049989 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.050060 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.050183 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 17:00:57 crc kubenswrapper[4710]: E1128 17:00:57.050235 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 17:02:59.050222069 +0000 UTC m=+268.308522154 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.140553 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.140594 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.140861 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.142921 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.142960 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.143109 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 17:00:57 crc kubenswrapper[4710]: I1128 17:00:57.144252 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 17:00:58 crc kubenswrapper[4710]: I1128 17:00:58.140479 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:00:58 crc kubenswrapper[4710]: I1128 17:00:58.143674 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 28 17:00:58 crc kubenswrapper[4710]: I1128 17:00:58.144888 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 28 17:01:04 crc kubenswrapper[4710]: I1128 17:01:04.994697 4710 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.043846 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4bldc"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.044720 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.059682 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fdmdc"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.059892 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.059953 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.060147 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.060177 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.060466 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.061550 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.063722 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.064143 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.066518 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.067289 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.068810 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z5klw"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.069621 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.080452 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ncq9p"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.080914 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.081360 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.081848 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.082927 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.083346 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.092962 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-282rn"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.094148 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.107116 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.107195 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.110338 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.110478 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.111672 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.111948 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.112018 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.112082 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.112346 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.112647 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.113081 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v7m54"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.113351 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.113777 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.114176 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.114499 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-282rn" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.115057 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.117090 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.117383 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.117428 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.117631 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.117790 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.117960 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.118046 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.119461 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.120538 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.122091 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.122350 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.122594 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.122839 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.123048 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.123285 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.123471 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.123895 4710 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.124280 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.124844 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.124993 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.125150 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.125224 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.125922 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.126075 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.128584 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rtzhv"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.130971 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.128668 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.126537 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.126585 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.126630 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127085 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127133 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127188 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127243 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127276 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127314 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127345 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127377 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127408 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127565 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127602 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.127648 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.147217 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.147442 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.147584 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 17:01:05 
crc kubenswrapper[4710]: I1128 17:01:05.147745 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.147857 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.147935 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148003 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148100 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148172 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148195 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148234 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148274 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148292 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148367 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148459 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148539 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148625 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148678 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148778 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.148957 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.149103 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.149267 4710 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.149410 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.151634 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.152001 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.153391 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.153592 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.153716 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.153966 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154157 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154316 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154465 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154489 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154616 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-client-ca\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154640 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjmpp\" (UniqueName: \"kubernetes.io/projected/bca7c24d-4634-4d32-a234-2c33cc0bf842-kube-api-access-wjmpp\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154667 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fd0e719-abfd-4656-bacb-f003d9cee909-encryption-config\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 
17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154682 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-client-ca\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154702 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6fd0e719-abfd-4656-bacb-f003d9cee909-node-pullsecrets\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154719 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/110d7e0f-d9ae-4b26-8846-685f3c4bb6fc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7z6nl\" (UID: \"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.154736 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bca7c24d-4634-4d32-a234-2c33cc0bf842-serving-cert\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.169245 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.169493 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-image-import-ca\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.169553 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.169589 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-audit\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.169622 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn8vv\" (UniqueName: \"kubernetes.io/projected/411f84b6-6676-4b0a-957c-eff49570cc88-kube-api-access-cn8vv\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.169651 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d933366c-bee9-4d19-8152-b4401d886b35-serving-cert\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.172306 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-z7cgp"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.173825 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-stcdf"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.175626 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.176118 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.177519 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.182989 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-serving-cert\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.183074 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.183914 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.184397 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv6cg\" (UniqueName: \"kubernetes.io/projected/6fd0e719-abfd-4656-bacb-f003d9cee909-kube-api-access-tv6cg\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.184493 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.184525 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsrtx\" (UniqueName: \"kubernetes.io/projected/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-kube-api-access-zsrtx\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187394 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-audit-dir\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187427 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp6wc\" (UniqueName: \"kubernetes.io/projected/110d7e0f-d9ae-4b26-8846-685f3c4bb6fc-kube-api-access-bp6wc\") pod \"openshift-apiserver-operator-796bbdcf4f-7z6nl\" (UID: \"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187450 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b390d2f-0343-4f77-a3a3-196d446347cb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187470 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-config\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187489 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b390d2f-0343-4f77-a3a3-196d446347cb-images\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187504 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-etcd-serving-ca\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187525 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fd0e719-abfd-4656-bacb-f003d9cee909-audit-dir\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187543 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/110d7e0f-d9ae-4b26-8846-685f3c4bb6fc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7z6nl\" (UID: \"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187566 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bca7c24d-4634-4d32-a234-2c33cc0bf842-service-ca-bundle\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187583 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b390d2f-0343-4f77-a3a3-196d446347cb-config\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187602 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-config\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.187628 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fd0e719-abfd-4656-bacb-f003d9cee909-etcd-client\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.195467 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198061 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198322 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.188635 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/636f3f84-f74c-44ab-b740-9919994c2a3b-auth-proxy-config\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198515 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-audit-policies\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198534 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5grm\" (UniqueName: \"kubernetes.io/projected/636f3f84-f74c-44ab-b740-9919994c2a3b-kube-api-access-x5grm\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198549 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-config\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198564 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-etcd-client\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198580 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bca7c24d-4634-4d32-a234-2c33cc0bf842-config\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198594 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwxtq\" (UniqueName: \"kubernetes.io/projected/8b390d2f-0343-4f77-a3a3-196d446347cb-kube-api-access-dwxtq\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198611 
4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/636f3f84-f74c-44ab-b740-9919994c2a3b-machine-approver-tls\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198637 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636f3f84-f74c-44ab-b740-9919994c2a3b-config\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198654 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd0e719-abfd-4656-bacb-f003d9cee909-serving-cert\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198671 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198685 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c2t2\" (UniqueName: \"kubernetes.io/projected/d933366c-bee9-4d19-8152-b4401d886b35-kube-api-access-6c2t2\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198713 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198729 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bca7c24d-4634-4d32-a234-2c33cc0bf842-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198744 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198774 4710 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-encryption-config\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198803 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/411f84b6-6676-4b0a-957c-eff49570cc88-serving-cert\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.198897 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n82pb"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.199425 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.199743 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.199977 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n82pb" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.201205 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.201360 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.201589 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.201734 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.203471 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.204357 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.204524 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.205138 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.206460 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.207298 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.207642 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.208357 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.208613 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.209085 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.209346 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.209733 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.212005 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.212207 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.213026 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.213280 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.213797 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.214021 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.214023 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.214706 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8thtd"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.215413 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.215770 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.215824 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.216213 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.216709 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.219365 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-rfr7v"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.220075 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.221599 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.222205 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.223326 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.224280 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.224940 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.225381 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.236469 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.237211 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbg64"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.238351 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.240857 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.247988 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-p7pp6"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.248072 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.248352 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.248888 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.249204 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.249348 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.249567 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.250141 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.251007 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.251662 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.252470 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xpfn7"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.252919 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.254669 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lmtkf"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.257524 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lmtkf" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.260666 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4bldc"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.263963 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fdmdc"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.263994 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z5klw"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.264888 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.266042 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-282rn"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.267721 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.267886 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ncq9p"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.268628 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.270974 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n82pb"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.271813 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.271967 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-stcdf"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.272980 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.273885 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.274857 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.275711 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z7cgp"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.277077 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.277642 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rtzhv"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.278562 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.279516 
4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.280581 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v7m54"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.281827 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.282507 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbg64"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.283622 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.284554 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8thtd"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.286593 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hf7ls"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.287490 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-79pb2"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.287951 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-79pb2" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.288407 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.288737 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.289661 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.290525 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.290842 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.291710 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.292447 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.293495 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.294798 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.295991 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["hostpath-provisioner/csi-hostpathplugin-hf7ls"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.297388 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.298047 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-p7pp6"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299275 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299288 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-audit\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299435 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1fe1016-39da-42d0-9d25-818227699166-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299511 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnpfh\" (UniqueName: \"kubernetes.io/projected/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-kube-api-access-pnpfh\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299587 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn8vv\" (UniqueName: \"kubernetes.io/projected/411f84b6-6676-4b0a-957c-eff49570cc88-kube-api-access-cn8vv\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299662 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d933366c-bee9-4d19-8152-b4401d886b35-serving-cert\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299741 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c469db-4972-46bb-8960-24891a1010b3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nfrwd\" (UID: \"30c469db-4972-46bb-8960-24891a1010b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299832 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-serving-cert\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299909 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv6cg\" (UniqueName: \"kubernetes.io/projected/6fd0e719-abfd-4656-bacb-f003d9cee909-kube-api-access-tv6cg\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.299979 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsrtx\" (UniqueName: \"kubernetes.io/projected/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-kube-api-access-zsrtx\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300060 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsrjz\" (UniqueName: \"kubernetes.io/projected/30c469db-4972-46bb-8960-24891a1010b3-kube-api-access-bsrjz\") pod \"openshift-controller-manager-operator-756b6f6bc6-nfrwd\" (UID: \"30c469db-4972-46bb-8960-24891a1010b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300141 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-audit-dir\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300203 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xpfn7"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300217 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp6wc\" (UniqueName: \"kubernetes.io/projected/110d7e0f-d9ae-4b26-8846-685f3c4bb6fc-kube-api-access-bp6wc\") pod \"openshift-apiserver-operator-796bbdcf4f-7z6nl\" (UID: \"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300301 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b390d2f-0343-4f77-a3a3-196d446347cb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300323 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-config\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300346 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300370 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b390d2f-0343-4f77-a3a3-196d446347cb-images\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300395 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-etcd-serving-ca\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300414 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fd0e719-abfd-4656-bacb-f003d9cee909-audit-dir\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300430 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-config\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300449 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300467 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/110d7e0f-d9ae-4b26-8846-685f3c4bb6fc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7z6nl\" (UID: \"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300483 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bca7c24d-4634-4d32-a234-2c33cc0bf842-service-ca-bundle\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300499 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b390d2f-0343-4f77-a3a3-196d446347cb-config\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 
28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300519 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fd0e719-abfd-4656-bacb-f003d9cee909-etcd-client\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300534 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300550 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7nwv\" (UniqueName: \"kubernetes.io/projected/e1fe1016-39da-42d0-9d25-818227699166-kube-api-access-w7nwv\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300566 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c469db-4972-46bb-8960-24891a1010b3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nfrwd\" (UID: \"30c469db-4972-46bb-8960-24891a1010b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300585 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/636f3f84-f74c-44ab-b740-9919994c2a3b-auth-proxy-config\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300601 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-audit-policies\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300616 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-config\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300633 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5grm\" (UniqueName: \"kubernetes.io/projected/636f3f84-f74c-44ab-b740-9919994c2a3b-kube-api-access-x5grm\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300649 4710 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-dwxtq\" (UniqueName: \"kubernetes.io/projected/8b390d2f-0343-4f77-a3a3-196d446347cb-kube-api-access-dwxtq\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300665 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-etcd-client\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300679 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bca7c24d-4634-4d32-a234-2c33cc0bf842-config\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300694 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/636f3f84-f74c-44ab-b740-9919994c2a3b-machine-approver-tls\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300710 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-service-ca\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300724 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e1fe1016-39da-42d0-9d25-818227699166-metrics-tls\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300768 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300785 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-trusted-ca-bundle\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300801 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636f3f84-f74c-44ab-b740-9919994c2a3b-config\") pod \"machine-approver-56656f9798-9mxpx\" 
(UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300818 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300836 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd0e719-abfd-4656-bacb-f003d9cee909-serving-cert\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300865 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300881 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c2t2\" (UniqueName: \"kubernetes.io/projected/d933366c-bee9-4d19-8152-b4401d886b35-kube-api-access-6c2t2\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300898 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1fe1016-39da-42d0-9d25-818227699166-trusted-ca\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300932 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300947 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300981 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-config\") pod \"console-f9d7485db-z7cgp\" (UID: 
\"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301001 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301019 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-policies\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301034 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301050 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bca7c24d-4634-4d32-a234-2c33cc0bf842-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301067 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-dir\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301082 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301100 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-encryption-config\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301117 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/411f84b6-6676-4b0a-957c-eff49570cc88-serving-cert\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301137 
4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjmpp\" (UniqueName: \"kubernetes.io/projected/bca7c24d-4634-4d32-a234-2c33cc0bf842-kube-api-access-wjmpp\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301151 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-client-ca\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301167 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301185 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301202 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301217 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh2g4\" (UniqueName: \"kubernetes.io/projected/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-kube-api-access-bh2g4\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301240 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fd0e719-abfd-4656-bacb-f003d9cee909-encryption-config\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301256 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-client-ca\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301271 4710 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6fd0e719-abfd-4656-bacb-f003d9cee909-node-pullsecrets\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301287 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-oauth-config\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301303 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/110d7e0f-d9ae-4b26-8846-685f3c4bb6fc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7z6nl\" (UID: \"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301318 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bca7c24d-4634-4d32-a234-2c33cc0bf842-serving-cert\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301341 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-image-import-ca\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301355 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301370 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-serving-cert\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.301384 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-oauth-serving-cert\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.302449 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.303251 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-audit-dir\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.300176 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-audit\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.304313 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.305527 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/636f3f84-f74c-44ab-b740-9919994c2a3b-auth-proxy-config\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.306055 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bca7c24d-4634-4d32-a234-2c33cc0bf842-service-ca-bundle\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.306135 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-client-ca\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.306569 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.306576 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.307990 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-audit-policies\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.308125 4710 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-config\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.308312 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/636f3f84-f74c-44ab-b740-9919994c2a3b-config\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.308594 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6fd0e719-abfd-4656-bacb-f003d9cee909-node-pullsecrets\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.309134 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bca7c24d-4634-4d32-a234-2c33cc0bf842-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.309213 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/110d7e0f-d9ae-4b26-8846-685f3c4bb6fc-config\") pod \"openshift-apiserver-operator-796bbdcf4f-7z6nl\" (UID: \"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.309319 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-config\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.309902 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.309916 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/636f3f84-f74c-44ab-b740-9919994c2a3b-machine-approver-tls\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.310882 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fd0e719-abfd-4656-bacb-f003d9cee909-encryption-config\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.310963 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8b390d2f-0343-4f77-a3a3-196d446347cb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.311690 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bca7c24d-4634-4d32-a234-2c33cc0bf842-config\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.312181 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8b390d2f-0343-4f77-a3a3-196d446347cb-images\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.312565 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-etcd-serving-ca\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.313134 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fd0e719-abfd-4656-bacb-f003d9cee909-etcd-client\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.313259 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-client-ca\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.313300 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fd0e719-abfd-4656-bacb-f003d9cee909-audit-dir\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.313911 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-config\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.314610 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-image-import-ca\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.314644 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b390d2f-0343-4f77-a3a3-196d446347cb-config\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.314989 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/110d7e0f-d9ae-4b26-8846-685f3c4bb6fc-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-7z6nl\" (UID: \"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.315001 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd0e719-abfd-4656-bacb-f003d9cee909-trusted-ca-bundle\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.315643 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/411f84b6-6676-4b0a-957c-eff49570cc88-serving-cert\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.315911 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd0e719-abfd-4656-bacb-f003d9cee909-serving-cert\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.316401 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-etcd-client\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.316504 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.316530 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-encryption-config\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.319860 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d933366c-bee9-4d19-8152-b4401d886b35-serving-cert\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.319917 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bca7c24d-4634-4d32-a234-2c33cc0bf842-serving-cert\") pod 
\"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.320496 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-79pb2"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.322961 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rk9hm"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.323675 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rk9hm" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.324455 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rk9hm"] Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.327532 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.329029 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-serving-cert\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.346713 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.372952 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.387630 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402204 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402248 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh2g4\" (UniqueName: \"kubernetes.io/projected/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-kube-api-access-bh2g4\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402293 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402312 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402328 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-oauth-config\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402381 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-serving-cert\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402403 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-oauth-serving-cert\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402451 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1fe1016-39da-42d0-9d25-818227699166-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402474 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnpfh\" (UniqueName: \"kubernetes.io/projected/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-kube-api-access-pnpfh\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402529 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c469db-4972-46bb-8960-24891a1010b3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nfrwd\" (UID: \"30c469db-4972-46bb-8960-24891a1010b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402570 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsrjz\" (UniqueName: \"kubernetes.io/projected/30c469db-4972-46bb-8960-24891a1010b3-kube-api-access-bsrjz\") pod \"openshift-controller-manager-operator-756b6f6bc6-nfrwd\" (UID: \"30c469db-4972-46bb-8960-24891a1010b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402629 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 
crc kubenswrapper[4710]: I1128 17:01:05.402656 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402705 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402727 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c469db-4972-46bb-8960-24891a1010b3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nfrwd\" (UID: \"30c469db-4972-46bb-8960-24891a1010b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402794 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7nwv\" (UniqueName: \"kubernetes.io/projected/e1fe1016-39da-42d0-9d25-818227699166-kube-api-access-w7nwv\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402857 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e1fe1016-39da-42d0-9d25-818227699166-metrics-tls\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402887 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-service-ca\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402945 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.402979 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403044 4710 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-trusted-ca-bundle\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403114 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1fe1016-39da-42d0-9d25-818227699166-trusted-ca\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403145 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403195 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403219 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-config\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403230 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-oauth-serving-cert\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403262 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-policies\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403292 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-dir\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403315 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: 
\"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.403608 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.404189 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.404411 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30c469db-4972-46bb-8960-24891a1010b3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-nfrwd\" (UID: \"30c469db-4972-46bb-8960-24891a1010b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.405457 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-config\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.405825 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-dir\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.408179 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.406547 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.406633 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30c469db-4972-46bb-8960-24891a1010b3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-nfrwd\" (UID: \"30c469db-4972-46bb-8960-24891a1010b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:05 crc 
kubenswrapper[4710]: I1128 17:01:05.405862 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1fe1016-39da-42d0-9d25-818227699166-trusted-ca\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.406802 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-trusted-ca-bundle\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.407467 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e1fe1016-39da-42d0-9d25-818227699166-metrics-tls\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.407143 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.407827 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.407838 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-service-ca\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.409228 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-policies\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.409875 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.410049 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.410515 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.412124 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.414652 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-serving-cert\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.415436 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.421287 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-oauth-config\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.427690 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.446933 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.467078 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.487014 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.507202 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.528135 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.567520 4710 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.587582 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.608047 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.628112 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.647381 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.667547 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.687683 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.707709 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.728508 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.747293 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.768099 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.787795 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.807994 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.827369 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.847838 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.867966 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.888365 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.908148 4710 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.928994 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.948615 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.967887 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 28 17:01:05 crc kubenswrapper[4710]: I1128 17:01:05.987935 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.008070 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.027693 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.048179 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.067915 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.087810 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.109429 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.128199 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.148332 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.168985 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.186932 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.207232 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.225727 4710 request.go:700] Waited for 1.001191935s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0 Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.227822 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.248135 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 
28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.267108 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.287812 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.308491 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.350884 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.351208 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.368096 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.388797 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.407740 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.428657 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.474277 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.474847 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.487366 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.508026 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.527660 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.548231 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.567576 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.587936 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.607968 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.628645 4710 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.648306 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.667937 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.688218 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.709193 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.727788 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.748505 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.768614 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.788301 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.809199 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.828722 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.848823 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.869687 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.888805 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.908058 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.928432 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.947850 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.967728 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 17:01:06 crc kubenswrapper[4710]: I1128 17:01:06.988129 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.008396 4710 reflector.go:368] Caches populated for 
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.028313 4710 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.048304 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.087381 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp6wc\" (UniqueName: \"kubernetes.io/projected/110d7e0f-d9ae-4b26-8846-685f3c4bb6fc-kube-api-access-bp6wc\") pod \"openshift-apiserver-operator-796bbdcf4f-7z6nl\" (UID: \"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.104899 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv6cg\" (UniqueName: \"kubernetes.io/projected/6fd0e719-abfd-4656-bacb-f003d9cee909-kube-api-access-tv6cg\") pod \"apiserver-76f77b778f-z5klw\" (UID: \"6fd0e719-abfd-4656-bacb-f003d9cee909\") " pod="openshift-apiserver/apiserver-76f77b778f-z5klw"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.124221 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsrtx\" (UniqueName: \"kubernetes.io/projected/1f2d0a06-b022-4bb6-9e49-b601359f5e4e-kube-api-access-zsrtx\") pod \"apiserver-7bbb656c7d-kq6jz\" (UID: \"1f2d0a06-b022-4bb6-9e49-b601359f5e4e\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.145095 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn8vv\" (UniqueName: \"kubernetes.io/projected/411f84b6-6676-4b0a-957c-eff49570cc88-kube-api-access-cn8vv\") pod \"controller-manager-879f6c89f-fdmdc\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.168918 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c2t2\" (UniqueName: \"kubernetes.io/projected/d933366c-bee9-4d19-8152-b4401d886b35-kube-api-access-6c2t2\") pod \"route-controller-manager-6576b87f9c-kr9gw\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.194177 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5grm\" (UniqueName: \"kubernetes.io/projected/636f3f84-f74c-44ab-b740-9919994c2a3b-kube-api-access-x5grm\") pod \"machine-approver-56656f9798-9mxpx\" (UID: \"636f3f84-f74c-44ab-b740-9919994c2a3b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.205605 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwxtq\" (UniqueName: \"kubernetes.io/projected/8b390d2f-0343-4f77-a3a3-196d446347cb-kube-api-access-dwxtq\") pod \"machine-api-operator-5694c8668f-4bldc\" (UID: \"8b390d2f-0343-4f77-a3a3-196d446347cb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.217472 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.224724 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjmpp\" (UniqueName: \"kubernetes.io/projected/bca7c24d-4634-4d32-a234-2c33cc0bf842-kube-api-access-wjmpp\") pod \"authentication-operator-69f744f599-ncq9p\" (UID: \"bca7c24d-4634-4d32-a234-2c33cc0bf842\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.226312 4710 request.go:700] Waited for 1.902415102s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.226516 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.229148 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.239978 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.248542 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Nov 28 17:01:07 crc kubenswrapper[4710]: W1128 17:01:07.260956 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod636f3f84_f74c_44ab_b740_9919994c2a3b.slice/crio-b1e26aed4f6bad4cabb36a07dd687bb485123744dd33eb035dc53a765db9b971 WatchSource:0}: Error finding container b1e26aed4f6bad4cabb36a07dd687bb485123744dd33eb035dc53a765db9b971: Status 404 returned error can't find the container with id b1e26aed4f6bad4cabb36a07dd687bb485123744dd33eb035dc53a765db9b971
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.267186 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.272117 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-z5klw"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.290387 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.322844 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.323189 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.332808 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh2g4\" (UniqueName: \"kubernetes.io/projected/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-kube-api-access-bh2g4\") pod \"oauth-openshift-558db77b4-v7m54\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " pod="openshift-authentication/oauth-openshift-558db77b4-v7m54"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.358300 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.366537 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsrjz\" (UniqueName: \"kubernetes.io/projected/30c469db-4972-46bb-8960-24891a1010b3-kube-api-access-bsrjz\") pod \"openshift-controller-manager-operator-756b6f6bc6-nfrwd\" (UID: \"30c469db-4972-46bb-8960-24891a1010b3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.371698 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnpfh\" (UniqueName: \"kubernetes.io/projected/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-kube-api-access-pnpfh\") pod \"console-f9d7485db-z7cgp\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " pod="openshift-console/console-f9d7485db-z7cgp"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.392986 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.393119 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z7cgp"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.394281 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd"
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.406439 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7nwv\" (UniqueName: \"kubernetes.io/projected/e1fe1016-39da-42d0-9d25-818227699166-kube-api-access-w7nwv\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.406870 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1fe1016-39da-42d0-9d25-818227699166-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nxlw9\" (UID: \"e1fe1016-39da-42d0-9d25-818227699166\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.436488 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8vn\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-kube-api-access-hx8vn\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.436612 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-smsqk\" (UID: \"dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.436684 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3e3a1c-47ab-4aea-9a12-6323314ca17a-serving-cert\") pod \"openshift-config-operator-7777fb866f-2n4l4\" (UID: \"6e3e3a1c-47ab-4aea-9a12-6323314ca17a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.436735 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfnfv\" (UniqueName: \"kubernetes.io/projected/dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68-kube-api-access-tfnfv\") pod \"cluster-samples-operator-665b6dd947-smsqk\" (UID: \"dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.436906 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.437026 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-registry-tls\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: 
\"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.437298 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" Nov 28 17:01:07 crc kubenswrapper[4710]: E1128 17:01:07.437350 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:07.937331591 +0000 UTC m=+157.195631636 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.437544 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-bound-sa-token\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.438884 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckprd\" (UniqueName: \"kubernetes.io/projected/6e3e3a1c-47ab-4aea-9a12-6323314ca17a-kube-api-access-ckprd\") pod \"openshift-config-operator-7777fb866f-2n4l4\" (UID: \"6e3e3a1c-47ab-4aea-9a12-6323314ca17a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.438926 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803dde31-2ca7-49ad-9db2-7b98dd682b99-config\") pod \"kube-apiserver-operator-766d6c64bb-bvc8s\" (UID: \"803dde31-2ca7-49ad-9db2-7b98dd682b99\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439014 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803dde31-2ca7-49ad-9db2-7b98dd682b99-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bvc8s\" (UID: \"803dde31-2ca7-49ad-9db2-7b98dd682b99\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439092 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-trusted-ca\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439132 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/38c6e200-b005-41fe-902b-1f5fc2f9039d-serving-cert\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439160 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw9q8\" (UniqueName: \"kubernetes.io/projected/1688c24e-0457-4929-a3c8-5feb624c8b11-kube-api-access-cw9q8\") pod \"downloads-7954f5f757-282rn\" (UID: \"1688c24e-0457-4929-a3c8-5feb624c8b11\") " pod="openshift-console/downloads-7954f5f757-282rn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439223 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48374daa-0613-4fe0-94a5-311e48a3979f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439254 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c6e200-b005-41fe-902b-1f5fc2f9039d-config\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439283 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38c6e200-b005-41fe-902b-1f5fc2f9039d-trusted-ca\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439300 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6e3e3a1c-47ab-4aea-9a12-6323314ca17a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2n4l4\" (UID: \"6e3e3a1c-47ab-4aea-9a12-6323314ca17a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439376 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-registry-certificates\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439414 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnd8m\" (UniqueName: \"kubernetes.io/projected/38c6e200-b005-41fe-902b-1f5fc2f9039d-kube-api-access-qnd8m\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439433 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/803dde31-2ca7-49ad-9db2-7b98dd682b99-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bvc8s\" (UID: \"803dde31-2ca7-49ad-9db2-7b98dd682b99\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.439474 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48374daa-0613-4fe0-94a5-311e48a3979f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.482096 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.540589 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.540799 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c920bc9-abe9-48c5-8124-f15727832b2e-secret-volume\") pod \"collect-profiles-29405820-qwzsv\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.540921 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c864ba87-2e40-4494-a652-20c34119e1c3-srv-cert\") pod \"olm-operator-6b444d44fb-l5pfv\" (UID: \"c864ba87-2e40-4494-a652-20c34119e1c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.540989 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efc1a80b-89db-4363-a441-b02ed373b2c7-config\") pod \"kube-controller-manager-operator-78b949d7b-7jcgx\" (UID: \"efc1a80b-89db-4363-a441-b02ed373b2c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541013 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lknpm\" (UniqueName: \"kubernetes.io/projected/2c67b6df-5032-47d4-b3d9-c98e925a80b1-kube-api-access-lknpm\") pod \"service-ca-9c57cc56f-p7pp6\" (UID: \"2c67b6df-5032-47d4-b3d9-c98e925a80b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541033 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af58f5bc-2ecb-49c9-91e5-dca036a205ef-metrics-tls\") pod \"dns-operator-744455d44c-n82pb\" (UID: \"af58f5bc-2ecb-49c9-91e5-dca036a205ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-n82pb" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541073 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803dde31-2ca7-49ad-9db2-7b98dd682b99-config\") pod \"kube-apiserver-operator-766d6c64bb-bvc8s\" (UID: \"803dde31-2ca7-49ad-9db2-7b98dd682b99\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541096 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fxh4\" (UniqueName: \"kubernetes.io/projected/4b76130c-96ae-4153-b99c-b7e938e8b71c-kube-api-access-8fxh4\") pod \"migrator-59844c95c7-2mtxd\" (UID: \"4b76130c-96ae-4153-b99c-b7e938e8b71c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541128 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvk6b\" (UniqueName: \"kubernetes.io/projected/93f56c4d-2217-41d4-82dc-aef9c5b5096e-kube-api-access-wvk6b\") pod \"marketplace-operator-79b997595-vbg64\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541149 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0e6a891c-4066-434e-8d84-ed9038be6f2f-tmpfs\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541170 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803dde31-2ca7-49ad-9db2-7b98dd682b99-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bvc8s\" (UID: \"803dde31-2ca7-49ad-9db2-7b98dd682b99\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541194 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc-config-volume\") pod \"dns-default-rk9hm\" (UID: \"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc\") " pod="openshift-dns/dns-default-rk9hm" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541216 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k7z7\" (UniqueName: \"kubernetes.io/projected/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-kube-api-access-2k7z7\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541238 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/673012e1-2884-444d-80c8-a2007d1ecb96-cert\") pod \"ingress-canary-79pb2\" (UID: \"673012e1-2884-444d-80c8-a2007d1ecb96\") " pod="openshift-ingress-canary/ingress-canary-79pb2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541273 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw9q8\" (UniqueName: 
\"kubernetes.io/projected/1688c24e-0457-4929-a3c8-5feb624c8b11-kube-api-access-cw9q8\") pod \"downloads-7954f5f757-282rn\" (UID: \"1688c24e-0457-4929-a3c8-5feb624c8b11\") " pod="openshift-console/downloads-7954f5f757-282rn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541294 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grrhv\" (UniqueName: \"kubernetes.io/projected/e65664f4-d101-4115-8bf7-751bb2276527-kube-api-access-grrhv\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541318 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vbg64\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541376 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c6e200-b005-41fe-902b-1f5fc2f9039d-config\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541398 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541493 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-mountpoint-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541521 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5zv\" (UniqueName: \"kubernetes.io/projected/9c920bc9-abe9-48c5-8124-f15727832b2e-kube-api-access-jz5zv\") pod \"collect-profiles-29405820-qwzsv\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541634 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19d00c4f-97cb-47db-abeb-2b29db7e427a-profile-collector-cert\") pod \"catalog-operator-68c6474976-g7sjn\" (UID: \"19d00c4f-97cb-47db-abeb-2b29db7e427a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541662 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-registration-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541684 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d50b14ce-cd9a-4737-9f55-dea3c5890d2d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-557n9\" (UID: \"d50b14ce-cd9a-4737-9f55-dea3c5890d2d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541743 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nxwv\" (UniqueName: \"kubernetes.io/projected/4e952e15-9cb9-491e-b6cb-afd314e72291-kube-api-access-8nxwv\") pod \"machine-config-server-lmtkf\" (UID: \"4e952e15-9cb9-491e-b6cb-afd314e72291\") " pod="openshift-machine-config-operator/machine-config-server-lmtkf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541802 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc-metrics-tls\") pod \"dns-default-rk9hm\" (UID: \"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc\") " pod="openshift-dns/dns-default-rk9hm" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541836 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4e952e15-9cb9-491e-b6cb-afd314e72291-node-bootstrap-token\") pod \"machine-config-server-lmtkf\" (UID: \"4e952e15-9cb9-491e-b6cb-afd314e72291\") " pod="openshift-machine-config-operator/machine-config-server-lmtkf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541886 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efc1a80b-89db-4363-a441-b02ed373b2c7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7jcgx\" (UID: \"efc1a80b-89db-4363-a441-b02ed373b2c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541908 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e65664f4-d101-4115-8bf7-751bb2276527-etcd-ca\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541950 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7mt2\" (UniqueName: \"kubernetes.io/projected/4c21068e-0ce0-4a6e-b41d-985df443a6a7-kube-api-access-t7mt2\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541976 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnd8m\" (UniqueName: 
\"kubernetes.io/projected/38c6e200-b005-41fe-902b-1f5fc2f9039d-kube-api-access-qnd8m\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.541994 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803dde31-2ca7-49ad-9db2-7b98dd682b99-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bvc8s\" (UID: \"803dde31-2ca7-49ad-9db2-7b98dd682b99\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542041 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48374daa-0613-4fe0-94a5-311e48a3979f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542068 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n97qt\" (UniqueName: \"kubernetes.io/projected/bf59eade-a8ba-4951-ade9-090baf203a1f-kube-api-access-n97qt\") pod \"multus-admission-controller-857f4d67dd-8thtd\" (UID: \"bf59eade-a8ba-4951-ade9-090baf203a1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542121 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e65664f4-d101-4115-8bf7-751bb2276527-etcd-client\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542147 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj5xh\" (UniqueName: \"kubernetes.io/projected/c864ba87-2e40-4494-a652-20c34119e1c3-kube-api-access-rj5xh\") pod \"olm-operator-6b444d44fb-l5pfv\" (UID: \"c864ba87-2e40-4494-a652-20c34119e1c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542201 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx8vn\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-kube-api-access-hx8vn\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542219 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e65664f4-d101-4115-8bf7-751bb2276527-config\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542235 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ncms\" (UniqueName: \"kubernetes.io/projected/19d00c4f-97cb-47db-abeb-2b29db7e427a-kube-api-access-6ncms\") 
pod \"catalog-operator-68c6474976-g7sjn\" (UID: \"19d00c4f-97cb-47db-abeb-2b29db7e427a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542283 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-smsqk\" (UID: \"dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542301 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55c290da-674e-4137-8fa3-97ea8353bf26-images\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542370 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d50b14ce-cd9a-4737-9f55-dea3c5890d2d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-557n9\" (UID: \"d50b14ce-cd9a-4737-9f55-dea3c5890d2d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542412 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/524b05ad-4b2c-4aa9-9851-5c0b4ee8556b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hfcn\" (UID: \"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542448 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfnfv\" (UniqueName: \"kubernetes.io/projected/dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68-kube-api-access-tfnfv\") pod \"cluster-samples-operator-665b6dd947-smsqk\" (UID: \"dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542487 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e65664f4-d101-4115-8bf7-751bb2276527-serving-cert\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542512 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/119dccaa-966c-49ef-8c37-d5cf86e23cf7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cflgb\" (UID: \"119dccaa-966c-49ef-8c37-d5cf86e23cf7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542555 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-registry-tls\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542580 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-bound-sa-token\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542624 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hzkl\" (UniqueName: \"kubernetes.io/projected/d50b14ce-cd9a-4737-9f55-dea3c5890d2d-kube-api-access-9hzkl\") pod \"kube-storage-version-migrator-operator-b67b599dd-557n9\" (UID: \"d50b14ce-cd9a-4737-9f55-dea3c5890d2d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.542652 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62zfb\" (UniqueName: \"kubernetes.io/projected/ca70110e-404a-459f-adde-ca66c6bd8f74-kube-api-access-62zfb\") pod \"machine-config-controller-84d6567774-k9mc2\" (UID: \"ca70110e-404a-459f-adde-ca66c6bd8f74\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.543182 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e29ad41-712f-4502-bdec-aad915a5cefc-config\") pod \"service-ca-operator-777779d784-jkbxp\" (UID: \"3e29ad41-712f-4502-bdec-aad915a5cefc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.543211 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19d00c4f-97cb-47db-abeb-2b29db7e427a-srv-cert\") pod \"catalog-operator-68c6474976-g7sjn\" (UID: \"19d00c4f-97cb-47db-abeb-2b29db7e427a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.543227 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msxvx\" (UniqueName: \"kubernetes.io/projected/55c290da-674e-4137-8fa3-97ea8353bf26-kube-api-access-msxvx\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.543262 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-plugins-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.543279 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-csi-data-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.543295 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e65664f4-d101-4115-8bf7-751bb2276527-etcd-service-ca\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.543311 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e29ad41-712f-4502-bdec-aad915a5cefc-serving-cert\") pod \"service-ca-operator-777779d784-jkbxp\" (UID: \"3e29ad41-712f-4502-bdec-aad915a5cefc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.543464 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckprd\" (UniqueName: \"kubernetes.io/projected/6e3e3a1c-47ab-4aea-9a12-6323314ca17a-kube-api-access-ckprd\") pod \"openshift-config-operator-7777fb866f-2n4l4\" (UID: \"6e3e3a1c-47ab-4aea-9a12-6323314ca17a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.543911 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38c6e200-b005-41fe-902b-1f5fc2f9039d-config\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.551258 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/803dde31-2ca7-49ad-9db2-7b98dd682b99-config\") pod \"kube-apiserver-operator-766d6c64bb-bvc8s\" (UID: \"803dde31-2ca7-49ad-9db2-7b98dd682b99\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.551734 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/803dde31-2ca7-49ad-9db2-7b98dd682b99-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-bvc8s\" (UID: \"803dde31-2ca7-49ad-9db2-7b98dd682b99\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.551728 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqt8\" (UniqueName: \"kubernetes.io/projected/af58f5bc-2ecb-49c9-91e5-dca036a205ef-kube-api-access-bbqt8\") pod \"dns-operator-744455d44c-n82pb\" (UID: \"af58f5bc-2ecb-49c9-91e5-dca036a205ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-n82pb" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.551906 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/524b05ad-4b2c-4aa9-9851-5c0b4ee8556b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hfcn\" (UID: 
\"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.551981 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c21068e-0ce0-4a6e-b41d-985df443a6a7-service-ca-bundle\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.552044 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca70110e-404a-459f-adde-ca66c6bd8f74-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k9mc2\" (UID: \"ca70110e-404a-459f-adde-ca66c6bd8f74\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.552196 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efc1a80b-89db-4363-a441-b02ed373b2c7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7jcgx\" (UID: \"efc1a80b-89db-4363-a441-b02ed373b2c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.552301 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vbg64\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.552467 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7dde429-e84e-48dd-a0dc-1bb66d082748-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d8tl4\" (UID: \"e7dde429-e84e-48dd-a0dc-1bb66d082748\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.552503 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48374daa-0613-4fe0-94a5-311e48a3979f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.552525 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4c21068e-0ce0-4a6e-b41d-985df443a6a7-default-certificate\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.552553 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/c864ba87-2e40-4494-a652-20c34119e1c3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-l5pfv\" (UID: \"c864ba87-2e40-4494-a652-20c34119e1c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.552618 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-trusted-ca\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.552681 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzqsq\" (UniqueName: \"kubernetes.io/projected/639a9052-76a6-4248-99cd-4638000730de-kube-api-access-lzqsq\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.554477 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c6e200-b005-41fe-902b-1f5fc2f9039d-serving-cert\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.554572 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55c290da-674e-4137-8fa3-97ea8353bf26-proxy-tls\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.554648 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvwnl\" (UniqueName: \"kubernetes.io/projected/e7dde429-e84e-48dd-a0dc-1bb66d082748-kube-api-access-nvwnl\") pod \"control-plane-machine-set-operator-78cbb6b69f-d8tl4\" (UID: \"e7dde429-e84e-48dd-a0dc-1bb66d082748\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.554712 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c21068e-0ce0-4a6e-b41d-985df443a6a7-metrics-certs\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.554747 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c920bc9-abe9-48c5-8124-f15727832b2e-config-volume\") pod \"collect-profiles-29405820-qwzsv\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.554894 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48374daa-0613-4fe0-94a5-311e48a3979f-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.554932 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdjvv\" (UniqueName: \"kubernetes.io/projected/119dccaa-966c-49ef-8c37-d5cf86e23cf7-kube-api-access-tdjvv\") pod \"package-server-manager-789f6589d5-cflgb\" (UID: \"119dccaa-966c-49ef-8c37-d5cf86e23cf7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.555264 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38c6e200-b005-41fe-902b-1f5fc2f9039d-trusted-ca\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.555721 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwcsz\" (UniqueName: \"kubernetes.io/projected/3e29ad41-712f-4502-bdec-aad915a5cefc-kube-api-access-kwcsz\") pod \"service-ca-operator-777779d784-jkbxp\" (UID: \"3e29ad41-712f-4502-bdec-aad915a5cefc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.555782 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6e3e3a1c-47ab-4aea-9a12-6323314ca17a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2n4l4\" (UID: \"6e3e3a1c-47ab-4aea-9a12-6323314ca17a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.556223 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4c21068e-0ce0-4a6e-b41d-985df443a6a7-stats-auth\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.556288 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bf59eade-a8ba-4951-ade9-090baf203a1f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8thtd\" (UID: \"bf59eade-a8ba-4951-ade9-090baf203a1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.556364 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-registry-certificates\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.556396 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e6a891c-4066-434e-8d84-ed9038be6f2f-webhook-cert\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: 
\"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.556428 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhttc\" (UniqueName: \"kubernetes.io/projected/0e6a891c-4066-434e-8d84-ed9038be6f2f-kube-api-access-qhttc\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.556461 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2c67b6df-5032-47d4-b3d9-c98e925a80b1-signing-key\") pod \"service-ca-9c57cc56f-p7pp6\" (UID: \"2c67b6df-5032-47d4-b3d9-c98e925a80b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.556489 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/524b05ad-4b2c-4aa9-9851-5c0b4ee8556b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hfcn\" (UID: \"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.556614 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e6a891c-4066-434e-8d84-ed9038be6f2f-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.556644 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.558331 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-smsqk\" (UID: \"dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.561961 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4e952e15-9cb9-491e-b6cb-afd314e72291-certs\") pod \"machine-config-server-lmtkf\" (UID: \"4e952e15-9cb9-491e-b6cb-afd314e72291\") " pod="openshift-machine-config-operator/machine-config-server-lmtkf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.567254 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48374daa-0613-4fe0-94a5-311e48a3979f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.567740 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2c67b6df-5032-47d4-b3d9-c98e925a80b1-signing-cabundle\") pod \"service-ca-9c57cc56f-p7pp6\" (UID: \"2c67b6df-5032-47d4-b3d9-c98e925a80b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.567791 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-socket-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.567833 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca70110e-404a-459f-adde-ca66c6bd8f74-proxy-tls\") pod \"machine-config-controller-84d6567774-k9mc2\" (UID: \"ca70110e-404a-459f-adde-ca66c6bd8f74\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.567883 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55c290da-674e-4137-8fa3-97ea8353bf26-auth-proxy-config\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:07 crc kubenswrapper[4710]: E1128 17:01:07.567945 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.067920472 +0000 UTC m=+157.326220527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.568058 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n57zw\" (UniqueName: \"kubernetes.io/projected/673012e1-2884-444d-80c8-a2007d1ecb96-kube-api-access-n57zw\") pod \"ingress-canary-79pb2\" (UID: \"673012e1-2884-444d-80c8-a2007d1ecb96\") " pod="openshift-ingress-canary/ingress-canary-79pb2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.568090 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.568110 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c55s8\" (UniqueName: \"kubernetes.io/projected/4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc-kube-api-access-c55s8\") pod \"dns-default-rk9hm\" (UID: \"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc\") " pod="openshift-dns/dns-default-rk9hm" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.568225 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3e3a1c-47ab-4aea-9a12-6323314ca17a-serving-cert\") pod \"openshift-config-operator-7777fb866f-2n4l4\" (UID: \"6e3e3a1c-47ab-4aea-9a12-6323314ca17a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.569086 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-trusted-ca\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.569182 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6e3e3a1c-47ab-4aea-9a12-6323314ca17a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2n4l4\" (UID: \"6e3e3a1c-47ab-4aea-9a12-6323314ca17a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.570200 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/38c6e200-b005-41fe-902b-1f5fc2f9039d-trusted-ca\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.572166 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e3e3a1c-47ab-4aea-9a12-6323314ca17a-serving-cert\") pod \"openshift-config-operator-7777fb866f-2n4l4\" (UID: \"6e3e3a1c-47ab-4aea-9a12-6323314ca17a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.576667 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-registry-certificates\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.577987 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-registry-tls\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.581751 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38c6e200-b005-41fe-902b-1f5fc2f9039d-serving-cert\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.591149 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/803dde31-2ca7-49ad-9db2-7b98dd682b99-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-bvc8s\" (UID: \"803dde31-2ca7-49ad-9db2-7b98dd682b99\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.612989 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnd8m\" (UniqueName: \"kubernetes.io/projected/38c6e200-b005-41fe-902b-1f5fc2f9039d-kube-api-access-qnd8m\") pod \"console-operator-58897d9998-stcdf\" (UID: \"38c6e200-b005-41fe-902b-1f5fc2f9039d\") " pod="openshift-console-operator/console-operator-58897d9998-stcdf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.632596 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfnfv\" (UniqueName: \"kubernetes.io/projected/dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68-kube-api-access-tfnfv\") pod \"cluster-samples-operator-665b6dd947-smsqk\" (UID: \"dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.654876 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx8vn\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-kube-api-access-hx8vn\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.666859 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.667307 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckprd\" (UniqueName: \"kubernetes.io/projected/6e3e3a1c-47ab-4aea-9a12-6323314ca17a-kube-api-access-ckprd\") pod \"openshift-config-operator-7777fb866f-2n4l4\" (UID: \"6e3e3a1c-47ab-4aea-9a12-6323314ca17a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673503 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19d00c4f-97cb-47db-abeb-2b29db7e427a-profile-collector-cert\") pod \"catalog-operator-68c6474976-g7sjn\" (UID: \"19d00c4f-97cb-47db-abeb-2b29db7e427a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673544 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-registration-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673567 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d50b14ce-cd9a-4737-9f55-dea3c5890d2d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-557n9\" (UID: \"d50b14ce-cd9a-4737-9f55-dea3c5890d2d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673604 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nxwv\" (UniqueName: \"kubernetes.io/projected/4e952e15-9cb9-491e-b6cb-afd314e72291-kube-api-access-8nxwv\") pod \"machine-config-server-lmtkf\" (UID: \"4e952e15-9cb9-491e-b6cb-afd314e72291\") " pod="openshift-machine-config-operator/machine-config-server-lmtkf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673626 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc-metrics-tls\") pod \"dns-default-rk9hm\" (UID: \"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc\") " pod="openshift-dns/dns-default-rk9hm" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673649 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4e952e15-9cb9-491e-b6cb-afd314e72291-node-bootstrap-token\") pod \"machine-config-server-lmtkf\" (UID: \"4e952e15-9cb9-491e-b6cb-afd314e72291\") " pod="openshift-machine-config-operator/machine-config-server-lmtkf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673669 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efc1a80b-89db-4363-a441-b02ed373b2c7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7jcgx\" (UID: \"efc1a80b-89db-4363-a441-b02ed373b2c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" Nov 28 17:01:07 crc 
kubenswrapper[4710]: I1128 17:01:07.673692 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e65664f4-d101-4115-8bf7-751bb2276527-etcd-ca\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673715 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7mt2\" (UniqueName: \"kubernetes.io/projected/4c21068e-0ce0-4a6e-b41d-985df443a6a7-kube-api-access-t7mt2\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673739 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n97qt\" (UniqueName: \"kubernetes.io/projected/bf59eade-a8ba-4951-ade9-090baf203a1f-kube-api-access-n97qt\") pod \"multus-admission-controller-857f4d67dd-8thtd\" (UID: \"bf59eade-a8ba-4951-ade9-090baf203a1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673781 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e65664f4-d101-4115-8bf7-751bb2276527-etcd-client\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673804 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj5xh\" (UniqueName: \"kubernetes.io/projected/c864ba87-2e40-4494-a652-20c34119e1c3-kube-api-access-rj5xh\") pod \"olm-operator-6b444d44fb-l5pfv\" (UID: \"c864ba87-2e40-4494-a652-20c34119e1c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673828 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e65664f4-d101-4115-8bf7-751bb2276527-config\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673850 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ncms\" (UniqueName: \"kubernetes.io/projected/19d00c4f-97cb-47db-abeb-2b29db7e427a-kube-api-access-6ncms\") pod \"catalog-operator-68c6474976-g7sjn\" (UID: \"19d00c4f-97cb-47db-abeb-2b29db7e427a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673875 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55c290da-674e-4137-8fa3-97ea8353bf26-images\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673897 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d50b14ce-cd9a-4737-9f55-dea3c5890d2d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-557n9\" (UID: \"d50b14ce-cd9a-4737-9f55-dea3c5890d2d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673923 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/524b05ad-4b2c-4aa9-9851-5c0b4ee8556b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hfcn\" (UID: \"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673954 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.673978 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e65664f4-d101-4115-8bf7-751bb2276527-serving-cert\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674002 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/119dccaa-966c-49ef-8c37-d5cf86e23cf7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cflgb\" (UID: \"119dccaa-966c-49ef-8c37-d5cf86e23cf7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674037 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hzkl\" (UniqueName: \"kubernetes.io/projected/d50b14ce-cd9a-4737-9f55-dea3c5890d2d-kube-api-access-9hzkl\") pod \"kube-storage-version-migrator-operator-b67b599dd-557n9\" (UID: \"d50b14ce-cd9a-4737-9f55-dea3c5890d2d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674060 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62zfb\" (UniqueName: \"kubernetes.io/projected/ca70110e-404a-459f-adde-ca66c6bd8f74-kube-api-access-62zfb\") pod \"machine-config-controller-84d6567774-k9mc2\" (UID: \"ca70110e-404a-459f-adde-ca66c6bd8f74\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674082 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e29ad41-712f-4502-bdec-aad915a5cefc-config\") pod \"service-ca-operator-777779d784-jkbxp\" (UID: \"3e29ad41-712f-4502-bdec-aad915a5cefc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674104 4710 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19d00c4f-97cb-47db-abeb-2b29db7e427a-srv-cert\") pod \"catalog-operator-68c6474976-g7sjn\" (UID: \"19d00c4f-97cb-47db-abeb-2b29db7e427a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674126 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-plugins-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674147 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-csi-data-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674167 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e65664f4-d101-4115-8bf7-751bb2276527-etcd-service-ca\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674192 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msxvx\" (UniqueName: \"kubernetes.io/projected/55c290da-674e-4137-8fa3-97ea8353bf26-kube-api-access-msxvx\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674215 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e29ad41-712f-4502-bdec-aad915a5cefc-serving-cert\") pod \"service-ca-operator-777779d784-jkbxp\" (UID: \"3e29ad41-712f-4502-bdec-aad915a5cefc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674237 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqt8\" (UniqueName: \"kubernetes.io/projected/af58f5bc-2ecb-49c9-91e5-dca036a205ef-kube-api-access-bbqt8\") pod \"dns-operator-744455d44c-n82pb\" (UID: \"af58f5bc-2ecb-49c9-91e5-dca036a205ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-n82pb" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674257 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/524b05ad-4b2c-4aa9-9851-5c0b4ee8556b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hfcn\" (UID: \"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674277 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c21068e-0ce0-4a6e-b41d-985df443a6a7-service-ca-bundle\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") 
" pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674297 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca70110e-404a-459f-adde-ca66c6bd8f74-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k9mc2\" (UID: \"ca70110e-404a-459f-adde-ca66c6bd8f74\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674327 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efc1a80b-89db-4363-a441-b02ed373b2c7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7jcgx\" (UID: \"efc1a80b-89db-4363-a441-b02ed373b2c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674350 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vbg64\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674372 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7dde429-e84e-48dd-a0dc-1bb66d082748-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d8tl4\" (UID: \"e7dde429-e84e-48dd-a0dc-1bb66d082748\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674395 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4c21068e-0ce0-4a6e-b41d-985df443a6a7-default-certificate\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674415 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c864ba87-2e40-4494-a652-20c34119e1c3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-l5pfv\" (UID: \"c864ba87-2e40-4494-a652-20c34119e1c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674438 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzqsq\" (UniqueName: \"kubernetes.io/projected/639a9052-76a6-4248-99cd-4638000730de-kube-api-access-lzqsq\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674459 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55c290da-674e-4137-8fa3-97ea8353bf26-proxy-tls\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674480 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvwnl\" (UniqueName: \"kubernetes.io/projected/e7dde429-e84e-48dd-a0dc-1bb66d082748-kube-api-access-nvwnl\") pod \"control-plane-machine-set-operator-78cbb6b69f-d8tl4\" (UID: \"e7dde429-e84e-48dd-a0dc-1bb66d082748\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674499 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c21068e-0ce0-4a6e-b41d-985df443a6a7-metrics-certs\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674521 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c920bc9-abe9-48c5-8124-f15727832b2e-config-volume\") pod \"collect-profiles-29405820-qwzsv\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674545 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdjvv\" (UniqueName: \"kubernetes.io/projected/119dccaa-966c-49ef-8c37-d5cf86e23cf7-kube-api-access-tdjvv\") pod \"package-server-manager-789f6589d5-cflgb\" (UID: \"119dccaa-966c-49ef-8c37-d5cf86e23cf7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674570 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwcsz\" (UniqueName: \"kubernetes.io/projected/3e29ad41-712f-4502-bdec-aad915a5cefc-kube-api-access-kwcsz\") pod \"service-ca-operator-777779d784-jkbxp\" (UID: \"3e29ad41-712f-4502-bdec-aad915a5cefc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674616 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4c21068e-0ce0-4a6e-b41d-985df443a6a7-stats-auth\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674637 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bf59eade-a8ba-4951-ade9-090baf203a1f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8thtd\" (UID: \"bf59eade-a8ba-4951-ade9-090baf203a1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674658 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e6a891c-4066-434e-8d84-ed9038be6f2f-webhook-cert\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674678 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhttc\" (UniqueName: \"kubernetes.io/projected/0e6a891c-4066-434e-8d84-ed9038be6f2f-kube-api-access-qhttc\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674699 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2c67b6df-5032-47d4-b3d9-c98e925a80b1-signing-key\") pod \"service-ca-9c57cc56f-p7pp6\" (UID: \"2c67b6df-5032-47d4-b3d9-c98e925a80b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674719 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/524b05ad-4b2c-4aa9-9851-5c0b4ee8556b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hfcn\" (UID: \"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674740 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e6a891c-4066-434e-8d84-ed9038be6f2f-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674782 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674810 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4e952e15-9cb9-491e-b6cb-afd314e72291-certs\") pod \"machine-config-server-lmtkf\" (UID: \"4e952e15-9cb9-491e-b6cb-afd314e72291\") " pod="openshift-machine-config-operator/machine-config-server-lmtkf" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674842 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2c67b6df-5032-47d4-b3d9-c98e925a80b1-signing-cabundle\") pod \"service-ca-9c57cc56f-p7pp6\" (UID: \"2c67b6df-5032-47d4-b3d9-c98e925a80b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674863 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-socket-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674887 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca70110e-404a-459f-adde-ca66c6bd8f74-proxy-tls\") pod \"machine-config-controller-84d6567774-k9mc2\" (UID: 
\"ca70110e-404a-459f-adde-ca66c6bd8f74\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674917 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55c290da-674e-4137-8fa3-97ea8353bf26-auth-proxy-config\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674939 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n57zw\" (UniqueName: \"kubernetes.io/projected/673012e1-2884-444d-80c8-a2007d1ecb96-kube-api-access-n57zw\") pod \"ingress-canary-79pb2\" (UID: \"673012e1-2884-444d-80c8-a2007d1ecb96\") " pod="openshift-ingress-canary/ingress-canary-79pb2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674964 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.674987 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c55s8\" (UniqueName: \"kubernetes.io/projected/4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc-kube-api-access-c55s8\") pod \"dns-default-rk9hm\" (UID: \"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc\") " pod="openshift-dns/dns-default-rk9hm" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675014 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c920bc9-abe9-48c5-8124-f15727832b2e-secret-volume\") pod \"collect-profiles-29405820-qwzsv\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675036 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c864ba87-2e40-4494-a652-20c34119e1c3-srv-cert\") pod \"olm-operator-6b444d44fb-l5pfv\" (UID: \"c864ba87-2e40-4494-a652-20c34119e1c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675054 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efc1a80b-89db-4363-a441-b02ed373b2c7-config\") pod \"kube-controller-manager-operator-78b949d7b-7jcgx\" (UID: \"efc1a80b-89db-4363-a441-b02ed373b2c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675068 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af58f5bc-2ecb-49c9-91e5-dca036a205ef-metrics-tls\") pod \"dns-operator-744455d44c-n82pb\" (UID: \"af58f5bc-2ecb-49c9-91e5-dca036a205ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-n82pb" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675083 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lknpm\" (UniqueName: \"kubernetes.io/projected/2c67b6df-5032-47d4-b3d9-c98e925a80b1-kube-api-access-lknpm\") pod \"service-ca-9c57cc56f-p7pp6\" (UID: \"2c67b6df-5032-47d4-b3d9-c98e925a80b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675099 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fxh4\" (UniqueName: \"kubernetes.io/projected/4b76130c-96ae-4153-b99c-b7e938e8b71c-kube-api-access-8fxh4\") pod \"migrator-59844c95c7-2mtxd\" (UID: \"4b76130c-96ae-4153-b99c-b7e938e8b71c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675119 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvk6b\" (UniqueName: \"kubernetes.io/projected/93f56c4d-2217-41d4-82dc-aef9c5b5096e-kube-api-access-wvk6b\") pod \"marketplace-operator-79b997595-vbg64\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675134 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0e6a891c-4066-434e-8d84-ed9038be6f2f-tmpfs\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675157 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc-config-volume\") pod \"dns-default-rk9hm\" (UID: \"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc\") " pod="openshift-dns/dns-default-rk9hm" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675173 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k7z7\" (UniqueName: \"kubernetes.io/projected/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-kube-api-access-2k7z7\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675190 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/673012e1-2884-444d-80c8-a2007d1ecb96-cert\") pod \"ingress-canary-79pb2\" (UID: \"673012e1-2884-444d-80c8-a2007d1ecb96\") " pod="openshift-ingress-canary/ingress-canary-79pb2" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675213 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grrhv\" (UniqueName: \"kubernetes.io/projected/e65664f4-d101-4115-8bf7-751bb2276527-kube-api-access-grrhv\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675228 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vbg64\" (UID: 
\"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675245 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675261 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-mountpoint-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.675277 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5zv\" (UniqueName: \"kubernetes.io/projected/9c920bc9-abe9-48c5-8124-f15727832b2e-kube-api-access-jz5zv\") pod \"collect-profiles-29405820-qwzsv\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.678308 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/19d00c4f-97cb-47db-abeb-2b29db7e427a-profile-collector-cert\") pod \"catalog-operator-68c6474976-g7sjn\" (UID: \"19d00c4f-97cb-47db-abeb-2b29db7e427a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.678578 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-registration-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.679126 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e65664f4-d101-4115-8bf7-751bb2276527-etcd-client\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.679795 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e65664f4-d101-4115-8bf7-751bb2276527-config\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.680639 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/55c290da-674e-4137-8fa3-97ea8353bf26-images\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.680843 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d50b14ce-cd9a-4737-9f55-dea3c5890d2d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-557n9\" (UID: \"d50b14ce-cd9a-4737-9f55-dea3c5890d2d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.688210 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d50b14ce-cd9a-4737-9f55-dea3c5890d2d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-557n9\" (UID: \"d50b14ce-cd9a-4737-9f55-dea3c5890d2d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.690623 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e65664f4-d101-4115-8bf7-751bb2276527-etcd-ca\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.690747 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e29ad41-712f-4502-bdec-aad915a5cefc-config\") pod \"service-ca-operator-777779d784-jkbxp\" (UID: \"3e29ad41-712f-4502-bdec-aad915a5cefc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" Nov 28 17:01:07 crc kubenswrapper[4710]: E1128 17:01:07.692698 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.192682113 +0000 UTC m=+157.450982158 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.697053 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e65664f4-d101-4115-8bf7-751bb2276527-etcd-service-ca\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.699969 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fdmdc"]
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.700702 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/524b05ad-4b2c-4aa9-9851-5c0b4ee8556b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hfcn\" (UID: \"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.703306 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bf59eade-a8ba-4951-ade9-090baf203a1f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-8thtd\" (UID: \"bf59eade-a8ba-4951-ade9-090baf203a1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.710958 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-plugins-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.711073 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-csi-data-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.712394 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/119dccaa-966c-49ef-8c37-d5cf86e23cf7-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-cflgb\" (UID: \"119dccaa-966c-49ef-8c37-d5cf86e23cf7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.714391 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-stcdf"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.715061 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c21068e-0ce0-4a6e-b41d-985df443a6a7-service-ca-bundle\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.716406 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e65664f4-d101-4115-8bf7-751bb2276527-serving-cert\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.716865 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c920bc9-abe9-48c5-8124-f15727832b2e-config-volume\") pod \"collect-profiles-29405820-qwzsv\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.718675 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vbg64\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbg64"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.721504 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0e6a891c-4066-434e-8d84-ed9038be6f2f-tmpfs\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.722924 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.723415 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/55c290da-674e-4137-8fa3-97ea8353bf26-auth-proxy-config\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.723498 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-socket-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.726698 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/4e952e15-9cb9-491e-b6cb-afd314e72291-node-bootstrap-token\") pod \"machine-config-server-lmtkf\" (UID: \"4e952e15-9cb9-491e-b6cb-afd314e72291\") " pod="openshift-machine-config-operator/machine-config-server-lmtkf"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.727097 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc-config-volume\") pod \"dns-default-rk9hm\" (UID: \"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc\") " pod="openshift-dns/dns-default-rk9hm"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.727294 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/524b05ad-4b2c-4aa9-9851-5c0b4ee8556b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hfcn\" (UID: \"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.727316 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc-metrics-tls\") pod \"dns-default-rk9hm\" (UID: \"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc\") " pod="openshift-dns/dns-default-rk9hm"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.727397 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c864ba87-2e40-4494-a652-20c34119e1c3-profile-collector-cert\") pod \"olm-operator-6b444d44fb-l5pfv\" (UID: \"c864ba87-2e40-4494-a652-20c34119e1c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.727923 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/19d00c4f-97cb-47db-abeb-2b29db7e427a-srv-cert\") pod \"catalog-operator-68c6474976-g7sjn\" (UID: \"19d00c4f-97cb-47db-abeb-2b29db7e427a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.727981 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/639a9052-76a6-4248-99cd-4638000730de-mountpoint-dir\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.729072 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/4c21068e-0ce0-4a6e-b41d-985df443a6a7-stats-auth\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.729228 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/efc1a80b-89db-4363-a441-b02ed373b2c7-config\") pod \"kube-controller-manager-operator-78b949d7b-7jcgx\" (UID: \"efc1a80b-89db-4363-a441-b02ed373b2c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.729496 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.730902 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/4e952e15-9cb9-491e-b6cb-afd314e72291-certs\") pod \"machine-config-server-lmtkf\" (UID: \"4e952e15-9cb9-491e-b6cb-afd314e72291\") " pod="openshift-machine-config-operator/machine-config-server-lmtkf"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.731666 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e6a891c-4066-434e-8d84-ed9038be6f2f-apiservice-cert\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.733916 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.734237 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c864ba87-2e40-4494-a652-20c34119e1c3-srv-cert\") pod \"olm-operator-6b444d44fb-l5pfv\" (UID: \"c864ba87-2e40-4494-a652-20c34119e1c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.734462 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/4c21068e-0ce0-4a6e-b41d-985df443a6a7-default-certificate\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.738217 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl"]
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.738803 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vbg64\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbg64"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.740051 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw9q8\" (UniqueName: \"kubernetes.io/projected/1688c24e-0457-4929-a3c8-5feb624c8b11-kube-api-access-cw9q8\") pod \"downloads-7954f5f757-282rn\" (UID: \"1688c24e-0457-4929-a3c8-5feb624c8b11\") " pod="openshift-console/downloads-7954f5f757-282rn"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.740576 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/55c290da-674e-4137-8fa3-97ea8353bf26-proxy-tls\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.740668 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/af58f5bc-2ecb-49c9-91e5-dca036a205ef-metrics-tls\") pod \"dns-operator-744455d44c-n82pb\" (UID: \"af58f5bc-2ecb-49c9-91e5-dca036a205ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-n82pb"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.741489 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ca70110e-404a-459f-adde-ca66c6bd8f74-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-k9mc2\" (UID: \"ca70110e-404a-459f-adde-ca66c6bd8f74\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.742678 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e29ad41-712f-4502-bdec-aad915a5cefc-serving-cert\") pod \"service-ca-operator-777779d784-jkbxp\" (UID: \"3e29ad41-712f-4502-bdec-aad915a5cefc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.743305 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e7dde429-e84e-48dd-a0dc-1bb66d082748-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-d8tl4\" (UID: \"e7dde429-e84e-48dd-a0dc-1bb66d082748\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.748588 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2c67b6df-5032-47d4-b3d9-c98e925a80b1-signing-key\") pod \"service-ca-9c57cc56f-p7pp6\" (UID: \"2c67b6df-5032-47d4-b3d9-c98e925a80b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.749043 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ca70110e-404a-459f-adde-ca66c6bd8f74-proxy-tls\") pod \"machine-config-controller-84d6567774-k9mc2\" (UID: \"ca70110e-404a-459f-adde-ca66c6bd8f74\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.749404 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e6a891c-4066-434e-8d84-ed9038be6f2f-webhook-cert\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.749463 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/673012e1-2884-444d-80c8-a2007d1ecb96-cert\") pod \"ingress-canary-79pb2\" (UID: \"673012e1-2884-444d-80c8-a2007d1ecb96\") " pod="openshift-ingress-canary/ingress-canary-79pb2"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.749488 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2c67b6df-5032-47d4-b3d9-c98e925a80b1-signing-cabundle\") pod \"service-ca-9c57cc56f-p7pp6\" (UID: \"2c67b6df-5032-47d4-b3d9-c98e925a80b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.749842 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c21068e-0ce0-4a6e-b41d-985df443a6a7-metrics-certs\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.751430 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c920bc9-abe9-48c5-8124-f15727832b2e-secret-volume\") pod \"collect-profiles-29405820-qwzsv\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.754526 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/efc1a80b-89db-4363-a441-b02ed373b2c7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-7jcgx\" (UID: \"efc1a80b-89db-4363-a441-b02ed373b2c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.755870 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7mt2\" (UniqueName: \"kubernetes.io/projected/4c21068e-0ce0-4a6e-b41d-985df443a6a7-kube-api-access-t7mt2\") pod \"router-default-5444994796-rfr7v\" (UID: \"4c21068e-0ce0-4a6e-b41d-985df443a6a7\") " pod="openshift-ingress/router-default-5444994796-rfr7v"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.766424 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-bound-sa-token\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.769134 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz"]
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.769788 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5zv\" (UniqueName: \"kubernetes.io/projected/9c920bc9-abe9-48c5-8124-f15727832b2e-kube-api-access-jz5zv\") pod \"collect-profiles-29405820-qwzsv\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.777150 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:07 crc kubenswrapper[4710]: E1128 17:01:07.782590 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.282549258 +0000 UTC m=+157.540849303 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.787326 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9"]
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.792746 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n97qt\" (UniqueName: \"kubernetes.io/projected/bf59eade-a8ba-4951-ade9-090baf203a1f-kube-api-access-n97qt\") pod \"multus-admission-controller-857f4d67dd-8thtd\" (UID: \"bf59eade-a8ba-4951-ade9-090baf203a1f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.792949 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-z5klw"]
Nov 28 17:01:07 crc kubenswrapper[4710]: W1128 17:01:07.796549 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2d0a06_b022_4bb6_9e49_b601359f5e4e.slice/crio-a582bd49517c7498d424a94bbb0d7d23f39610bf4a642c917d8a1014ad3c277c WatchSource:0}: Error finding container a582bd49517c7498d424a94bbb0d7d23f39610bf4a642c917d8a1014ad3c277c: Status 404 returned error can't find the container with id a582bd49517c7498d424a94bbb0d7d23f39610bf4a642c917d8a1014ad3c277c
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.799409 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-rfr7v"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.802603 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj5xh\" (UniqueName: \"kubernetes.io/projected/c864ba87-2e40-4494-a652-20c34119e1c3-kube-api-access-rj5xh\") pod \"olm-operator-6b444d44fb-l5pfv\" (UID: \"c864ba87-2e40-4494-a652-20c34119e1c3\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.823603 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ncms\" (UniqueName: \"kubernetes.io/projected/19d00c4f-97cb-47db-abeb-2b29db7e427a-kube-api-access-6ncms\") pod \"catalog-operator-68c6474976-g7sjn\" (UID: \"19d00c4f-97cb-47db-abeb-2b29db7e427a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.825505 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd"]
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.843167 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nxwv\" (UniqueName: \"kubernetes.io/projected/4e952e15-9cb9-491e-b6cb-afd314e72291-kube-api-access-8nxwv\") pod \"machine-config-server-lmtkf\" (UID: \"4e952e15-9cb9-491e-b6cb-afd314e72291\") " pod="openshift-machine-config-operator/machine-config-server-lmtkf"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.851148 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv"
Nov 28 17:01:07 crc kubenswrapper[4710]: W1128 17:01:07.858447 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30c469db_4972_46bb_8960_24891a1010b3.slice/crio-9d11d2346fb4e2e5753c2398a5f792006ebee2d194759f7da9a9e5653fddd33e WatchSource:0}: Error finding container 9d11d2346fb4e2e5753c2398a5f792006ebee2d194759f7da9a9e5653fddd33e: Status 404 returned error can't find the container with id 9d11d2346fb4e2e5753c2398a5f792006ebee2d194759f7da9a9e5653fddd33e
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.866121 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/efc1a80b-89db-4363-a441-b02ed373b2c7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-7jcgx\" (UID: \"efc1a80b-89db-4363-a441-b02ed373b2c7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.881508 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v7m54"]
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.882127 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:07 crc kubenswrapper[4710]: E1128 17:01:07.882534 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.382523198 +0000 UTC m=+157.640823243 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.883314 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hzkl\" (UniqueName: \"kubernetes.io/projected/d50b14ce-cd9a-4737-9f55-dea3c5890d2d-kube-api-access-9hzkl\") pod \"kube-storage-version-migrator-operator-b67b599dd-557n9\" (UID: \"d50b14ce-cd9a-4737-9f55-dea3c5890d2d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9"
Nov 28 17:01:07 crc kubenswrapper[4710]: W1128 17:01:07.885151 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c21068e_0ce0_4a6e_b41d_985df443a6a7.slice/crio-98583d4e30180616f5e7afe952954396b0020ad4d9447c21d8fe1d274dcc5e80 WatchSource:0}: Error finding container 98583d4e30180616f5e7afe952954396b0020ad4d9447c21d8fe1d274dcc5e80: Status 404 returned error can't find the container with id 98583d4e30180616f5e7afe952954396b0020ad4d9447c21d8fe1d274dcc5e80
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.889655 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.895208 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" event={"ID":"30c469db-4972-46bb-8960-24891a1010b3","Type":"ContainerStarted","Data":"9d11d2346fb4e2e5753c2398a5f792006ebee2d194759f7da9a9e5653fddd33e"}
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.901147 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lmtkf"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.902723 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzqsq\" (UniqueName: \"kubernetes.io/projected/639a9052-76a6-4248-99cd-4638000730de-kube-api-access-lzqsq\") pod \"csi-hostpathplugin-hf7ls\" (UID: \"639a9052-76a6-4248-99cd-4638000730de\") " pod="hostpath-provisioner/csi-hostpathplugin-hf7ls"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.917819 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" event={"ID":"636f3f84-f74c-44ab-b740-9919994c2a3b","Type":"ContainerStarted","Data":"7e0c9358eda0792b1f8bd230ddf0c4c730f802d24853199656e56a974b78f7b4"}
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.918344 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" event={"ID":"636f3f84-f74c-44ab-b740-9919994c2a3b","Type":"ContainerStarted","Data":"b1e26aed4f6bad4cabb36a07dd687bb485123744dd33eb035dc53a765db9b971"}
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.919390 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" event={"ID":"1f2d0a06-b022-4bb6-9e49-b601359f5e4e","Type":"ContainerStarted","Data":"a582bd49517c7498d424a94bbb0d7d23f39610bf4a642c917d8a1014ad3c277c"}
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.920386 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" event={"ID":"e1fe1016-39da-42d0-9d25-818227699166","Type":"ContainerStarted","Data":"b80dd293377d05c6fcc2b15002f335678657332b07d938888f1af495b76ab51f"}
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.921164 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" event={"ID":"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc","Type":"ContainerStarted","Data":"f09e4eddc72cddd85d6f5bf5cb370f48ed0e22461a48f250411b1fe7464bfd5c"}
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.921855 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" event={"ID":"411f84b6-6676-4b0a-957c-eff49570cc88","Type":"ContainerStarted","Data":"21c2f1b725c6613aaffcc1ca23f4fbadf114b19dac4d18b9bae286781c8bfdeb"}
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.924568 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rfr7v" event={"ID":"4c21068e-0ce0-4a6e-b41d-985df443a6a7","Type":"ContainerStarted","Data":"98583d4e30180616f5e7afe952954396b0020ad4d9447c21d8fe1d274dcc5e80"}
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.924959 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.929122 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" event={"ID":"6fd0e719-abfd-4656-bacb-f003d9cee909","Type":"ContainerStarted","Data":"15cfd55b80708a3bdced2ab4a8d9ca7f675d6ed136cfa9309cc73e2b2149b249"}
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.935179 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hf7ls"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.938326 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.941241 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdjvv\" (UniqueName: \"kubernetes.io/projected/119dccaa-966c-49ef-8c37-d5cf86e23cf7-kube-api-access-tdjvv\") pod \"package-server-manager-789f6589d5-cflgb\" (UID: \"119dccaa-966c-49ef-8c37-d5cf86e23cf7\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.953687 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-282rn"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.960553 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msxvx\" (UniqueName: \"kubernetes.io/projected/55c290da-674e-4137-8fa3-97ea8353bf26-kube-api-access-msxvx\") pod \"machine-config-operator-74547568cd-swfl4\" (UID: \"55c290da-674e-4137-8fa3-97ea8353bf26\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.980337 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62zfb\" (UniqueName: \"kubernetes.io/projected/ca70110e-404a-459f-adde-ca66c6bd8f74-kube-api-access-62zfb\") pod \"machine-config-controller-84d6567774-k9mc2\" (UID: \"ca70110e-404a-459f-adde-ca66c6bd8f74\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2"
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.982915 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.983182 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvwnl\" (UniqueName: \"kubernetes.io/projected/e7dde429-e84e-48dd-a0dc-1bb66d082748-kube-api-access-nvwnl\") pod \"control-plane-machine-set-operator-78cbb6b69f-d8tl4\" (UID: \"e7dde429-e84e-48dd-a0dc-1bb66d082748\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4"
Nov 28 17:01:07 crc kubenswrapper[4710]: E1128 17:01:07.983294 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.483265554 +0000 UTC m=+157.741565609 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:07 crc kubenswrapper[4710]: I1128 17:01:07.983467 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:07 crc kubenswrapper[4710]: E1128 17:01:07.984066 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.48405825 +0000 UTC m=+157.742358295 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.016733 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwcsz\" (UniqueName: \"kubernetes.io/projected/3e29ad41-712f-4502-bdec-aad915a5cefc-kube-api-access-kwcsz\") pod \"service-ca-operator-777779d784-jkbxp\" (UID: \"3e29ad41-712f-4502-bdec-aad915a5cefc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.023519 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqt8\" (UniqueName: \"kubernetes.io/projected/af58f5bc-2ecb-49c9-91e5-dca036a205ef-kube-api-access-bbqt8\") pod \"dns-operator-744455d44c-n82pb\" (UID: \"af58f5bc-2ecb-49c9-91e5-dca036a205ef\") " pod="openshift-dns-operator/dns-operator-744455d44c-n82pb"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.044055 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lknpm\" (UniqueName: \"kubernetes.io/projected/2c67b6df-5032-47d4-b3d9-c98e925a80b1-kube-api-access-lknpm\") pod \"service-ca-9c57cc56f-p7pp6\" (UID: \"2c67b6df-5032-47d4-b3d9-c98e925a80b1\") " pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.049431 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-n82pb"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.056164 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.062466 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.063116 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fxh4\" (UniqueName: \"kubernetes.io/projected/4b76130c-96ae-4153-b99c-b7e938e8b71c-kube-api-access-8fxh4\") pod \"migrator-59844c95c7-2mtxd\" (UID: \"4b76130c-96ae-4153-b99c-b7e938e8b71c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.068975 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.078578 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.082343 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.082397 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-ncq9p"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.082981 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.084351 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.084655 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.58463828 +0000 UTC m=+157.842938325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.084730 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.095639 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.100210 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvk6b\" (UniqueName: \"kubernetes.io/projected/93f56c4d-2217-41d4-82dc-aef9c5b5096e-kube-api-access-wvk6b\") pod \"marketplace-operator-79b997595-vbg64\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " pod="openshift-marketplace/marketplace-operator-79b997595-vbg64"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.122594 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhttc\" (UniqueName: \"kubernetes.io/projected/0e6a891c-4066-434e-8d84-ed9038be6f2f-kube-api-access-qhttc\") pod \"packageserver-d55dfcdfc-sgkms\" (UID: \"0e6a891c-4066-434e-8d84-ed9038be6f2f\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.128944 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n57zw\" (UniqueName: \"kubernetes.io/projected/673012e1-2884-444d-80c8-a2007d1ecb96-kube-api-access-n57zw\") pod \"ingress-canary-79pb2\" (UID: \"673012e1-2884-444d-80c8-a2007d1ecb96\") " pod="openshift-ingress-canary/ingress-canary-79pb2"
Nov 28 17:01:08 crc kubenswrapper[4710]: W1128 17:01:08.133669 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbca7c24d_4634_4d32_a234_2c33cc0bf842.slice/crio-dbf5aee20540fff9063068ce332b607370e13dee0ede7f3f0425994c77aa2ef0 WatchSource:0}: Error finding container dbf5aee20540fff9063068ce332b607370e13dee0ede7f3f0425994c77aa2ef0: Status 404 returned error can't find the container with id dbf5aee20540fff9063068ce332b607370e13dee0ede7f3f0425994c77aa2ef0
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.135381 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.143660 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.144435 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-z7cgp"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.149599 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k7z7\" (UniqueName: \"kubernetes.io/projected/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-kube-api-access-2k7z7\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.155040 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.156968 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4bldc"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.158188 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.165100 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.172276 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.183590 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c55s8\" (UniqueName: \"kubernetes.io/projected/4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc-kube-api-access-c55s8\") pod \"dns-default-rk9hm\" (UID: \"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc\") " pod="openshift-dns/dns-default-rk9hm"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.183840 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.184057 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-stcdf"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.190686 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/524b05ad-4b2c-4aa9-9851-5c0b4ee8556b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-5hfcn\" (UID: \"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn"
Nov 28 17:01:08 crc kubenswrapper[4710]: W1128 17:01:08.193270 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod803dde31_2ca7_49ad_9db2_7b98dd682b99.slice/crio-653297a310452d8d2b4d198d7e41b9dbaf2ea24310adfdac747f164b6666f9db WatchSource:0}: Error finding container 653297a310452d8d2b4d198d7e41b9dbaf2ea24310adfdac747f164b6666f9db: Status 404 returned error can't find the container with id 653297a310452d8d2b4d198d7e41b9dbaf2ea24310adfdac747f164b6666f9db
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.196424 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.204049 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grrhv\" (UniqueName: \"kubernetes.io/projected/e65664f4-d101-4115-8bf7-751bb2276527-kube-api-access-grrhv\") pod \"etcd-operator-b45778765-xpfn7\" (UID: \"e65664f4-d101-4115-8bf7-751bb2276527\") " pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7"
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.204296 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.704276437 +0000 UTC m=+157.962576482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.212926 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-79pb2"
Nov 28 17:01:08 crc kubenswrapper[4710]: W1128 17:01:08.218874 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2afb75a4_3327_4ac7_b503_a5bfbf6f3fa3.slice/crio-9b794631d23025612db4cd0dc4d84121fb726661e2ad09c3d22241a3722ad698 WatchSource:0}: Error finding container 9b794631d23025612db4cd0dc4d84121fb726661e2ad09c3d22241a3722ad698: Status 404 returned error can't find the container with id 9b794631d23025612db4cd0dc4d84121fb726661e2ad09c3d22241a3722ad698
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.220267 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rzq8k\" (UID: \"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.237918 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rk9hm"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.315323 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.317188 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.817159795 +0000 UTC m=+158.075459830 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.317342 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.317926 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.817908289 +0000 UTC m=+158.076208334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.325009 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.343746 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.357185 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv"]
Nov 28 17:01:08 crc kubenswrapper[4710]: W1128 17:01:08.421446 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc864ba87_2e40_4494_a652_20c34119e1c3.slice/crio-54a4c14ece76285145756b0c80647fc874874610e30cdce81ed9dfca6bb40369 WatchSource:0}: Error finding container 54a4c14ece76285145756b0c80647fc874874610e30cdce81ed9dfca6bb40369: Status 404 returned error can't find the container with id 54a4c14ece76285145756b0c80647fc874874610e30cdce81ed9dfca6bb40369
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.423912 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.424143 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.924117681 +0000 UTC m=+158.182417726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.424216 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.424604 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:08.924595867 +0000 UTC m=+158.182895912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.432570 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-282rn"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.477668 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.497268 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7"
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.526055 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.526376 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.026361145 +0000 UTC m=+158.284661190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.556737 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hf7ls"]
Nov 28 17:01:08 crc kubenswrapper[4710]: W1128 17:01:08.624068 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1688c24e_0457_4929_a3c8_5feb624c8b11.slice/crio-ed6aa944b679b18dc61548cc89f2c46a0f1fa08f54f90e4cbd8622a19493367c WatchSource:0}: Error finding container ed6aa944b679b18dc61548cc89f2c46a0f1fa08f54f90e4cbd8622a19493367c: Status 404 returned error can't find the container with id ed6aa944b679b18dc61548cc89f2c46a0f1fa08f54f90e4cbd8622a19493367c
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.627250 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.628241 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.128224988 +0000 UTC m=+158.386525033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: W1128 17:01:08.696736 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod639a9052_76a6_4248_99cd_4638000730de.slice/crio-f8e69c23151e0882ed250d876471dcffc3da951deb954ba81b6cae742f751375 WatchSource:0}: Error finding container f8e69c23151e0882ed250d876471dcffc3da951deb954ba81b6cae742f751375: Status 404 returned error can't find the container with id f8e69c23151e0882ed250d876471dcffc3da951deb954ba81b6cae742f751375
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.713437 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.728287 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.728606 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.228591841 +0000 UTC m=+158.486891886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.813693 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.822292 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.968669 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-8thtd"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.972706 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:08 crc kubenswrapper[4710]: E1128 17:01:08.973155 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.473138964 +0000 UTC m=+158.731439019 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.987719 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4"]
Nov 28 17:01:08 crc kubenswrapper[4710]: I1128 17:01:08.996954 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" event={"ID":"dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68","Type":"ContainerStarted","Data":"be8e079b0c5f33f1d442fac8cdbd0f59579f8cee8da62b6f5a0074025df31fb3"}
Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.016439 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" event={"ID":"8b390d2f-0343-4f77-a3a3-196d446347cb","Type":"ContainerStarted","Data":"876d9ccecd47da73a34924fe64396f633a9f06c73ed74dd7fdd5c3951d354362"}
Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.016492 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" event={"ID":"8b390d2f-0343-4f77-a3a3-196d446347cb","Type":"ContainerStarted","Data":"db2d086ab4696837aacc39af18381b05cbe68fa31d543ba4ce4d00ffba11b7a2"}
Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.020534 4710 generic.go:334] "Generic (PLEG): container finished" podID="6fd0e719-abfd-4656-bacb-f003d9cee909" containerID="5ba67febf20f05543dd8dd5a1692553a03398180a8b043273a128175a10955d1" exitCode=0
Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.020578 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" event={"ID":"6fd0e719-abfd-4656-bacb-f003d9cee909","Type":"ContainerDied","Data":"5ba67febf20f05543dd8dd5a1692553a03398180a8b043273a128175a10955d1"}
Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.026573 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" event={"ID":"e1fe1016-39da-42d0-9d25-818227699166","Type":"ContainerStarted","Data":"2620b1b4689a73ac2a9cf85dd7d2ce860a9edf16a3a98a2a2a5cef25d89a2636"}
Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.028343 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" event={"ID":"9c920bc9-abe9-48c5-8124-f15727832b2e","Type":"ContainerStarted","Data":"db4de325a8a9dc14f4775c7498b5eeafcda02aa11151d02e56967b8bebf1e021"}
Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.036953 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-rfr7v" event={"ID":"4c21068e-0ce0-4a6e-b41d-985df443a6a7","Type":"ContainerStarted","Data":"24185b8bc0a8841abf2b9848dc90b23e9096822fe0dfe06f90a4e95d68264416"}
Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.039463 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-stcdf"
event={"ID":"38c6e200-b005-41fe-902b-1f5fc2f9039d","Type":"ContainerStarted","Data":"91cfc9ad07d87e92fd3ea67d2f981b71895d9264403b696ccf712b4893aae8e3"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.041039 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" event={"ID":"639a9052-76a6-4248-99cd-4638000730de","Type":"ContainerStarted","Data":"f8e69c23151e0882ed250d876471dcffc3da951deb954ba81b6cae742f751375"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.050088 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" event={"ID":"30c469db-4972-46bb-8960-24891a1010b3","Type":"ContainerStarted","Data":"2b03b027a6759c69e0f06524244663146d1cc671c0ffe4bde1d89588a84ce234"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.059549 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-282rn" event={"ID":"1688c24e-0457-4929-a3c8-5feb624c8b11","Type":"ContainerStarted","Data":"ed6aa944b679b18dc61548cc89f2c46a0f1fa08f54f90e4cbd8622a19493367c"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.069101 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" event={"ID":"803dde31-2ca7-49ad-9db2-7b98dd682b99","Type":"ContainerStarted","Data":"653297a310452d8d2b4d198d7e41b9dbaf2ea24310adfdac747f164b6666f9db"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.074139 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z7cgp" event={"ID":"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3","Type":"ContainerStarted","Data":"9b794631d23025612db4cd0dc4d84121fb726661e2ad09c3d22241a3722ad698"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.074162 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.074469 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.574449028 +0000 UTC m=+158.832749073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.074498 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.076066 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.576034569 +0000 UTC m=+158.834334614 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.090997 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" event={"ID":"d933366c-bee9-4d19-8152-b4401d886b35","Type":"ContainerStarted","Data":"e679a26314d8e7cd8114263132128846c177171b90c2ece40118fddb1a4248e7"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.094403 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lmtkf" event={"ID":"4e952e15-9cb9-491e-b6cb-afd314e72291","Type":"ContainerStarted","Data":"a96560e14eb00bb18af1305843712ef68e76541dbc497f6d5a8b8a326eabf3cb"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.096687 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" event={"ID":"110d7e0f-d9ae-4b26-8846-685f3c4bb6fc","Type":"ContainerStarted","Data":"dfd5534f978794f6037d0dea26c5a04c009270d78ff0d2a75aa294cff22ca833"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.177543 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.177959 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.677908202 +0000 UTC m=+158.936208247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.178362 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.178829 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.678816821 +0000 UTC m=+158.937116866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.213139 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.213172 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" event={"ID":"5a82d2d7-4966-4dff-b1bf-5995aedd9fae","Type":"ContainerStarted","Data":"40e260f6329a2be33482c379bfcc8fb61d36893ee653405702e8503da6a9f658"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.213188 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" event={"ID":"5a82d2d7-4966-4dff-b1bf-5995aedd9fae","Type":"ContainerStarted","Data":"5a3f8eb724e64786a9c08c75847247f8fe5afe7542a7cc0c8d9255c2f527a9a2"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.219502 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" event={"ID":"636f3f84-f74c-44ab-b740-9919994c2a3b","Type":"ContainerStarted","Data":"5776c86e7f76a54b3e0a4de7999ee5ece983c639ecaba9e76fc7ab991efcdab5"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.228923 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" event={"ID":"bca7c24d-4634-4d32-a234-2c33cc0bf842","Type":"ContainerStarted","Data":"dbf5aee20540fff9063068ce332b607370e13dee0ede7f3f0425994c77aa2ef0"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.248501 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" 
event={"ID":"c864ba87-2e40-4494-a652-20c34119e1c3","Type":"ContainerStarted","Data":"54a4c14ece76285145756b0c80647fc874874610e30cdce81ed9dfca6bb40369"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.282920 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.283935 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.783918618 +0000 UTC m=+159.042218663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.284464 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" event={"ID":"411f84b6-6676-4b0a-957c-eff49570cc88","Type":"ContainerStarted","Data":"b16b1303a5147032df30a83d8d3b045358cf51d65b7b3d4fac8293c5a328f7a5"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.285642 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.292925 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.298178 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.309340 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbg64"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.311397 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.311708 4710 generic.go:334] "Generic (PLEG): container finished" podID="1f2d0a06-b022-4bb6-9e49-b601359f5e4e" containerID="e2642c357788849e0d2247fd7e295923fd291b6226c3ea11ccf2ce3079e2d72e" exitCode=0 Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.311745 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" event={"ID":"1f2d0a06-b022-4bb6-9e49-b601359f5e4e","Type":"ContainerDied","Data":"e2642c357788849e0d2247fd7e295923fd291b6226c3ea11ccf2ce3079e2d72e"} Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.384271 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.384680 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.884665094 +0000 UTC m=+159.142965139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.486975 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.487152 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.987127584 +0000 UTC m=+159.245427669 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.487392 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.488992 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:09.988981045 +0000 UTC m=+159.247281090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.576030 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-79pb2"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.587897 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.588065 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.088042106 +0000 UTC m=+159.346342151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.588372 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.588733 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.088718988 +0000 UTC m=+159.347019043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.650601 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.656734 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-n82pb"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.689224 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.689372 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.18934495 +0000 UTC m=+159.447644995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.689399 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.689710 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.189703401 +0000 UTC m=+159.448003436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.768116 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9mxpx" podStartSLOduration=135.768099375 podStartE2EDuration="2m15.768099375s" podCreationTimestamp="2025-11-28 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:09.738380635 +0000 UTC m=+158.996680680" watchObservedRunningTime="2025-11-28 17:01:09.768099375 +0000 UTC m=+159.026399420" Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.789940 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.790130 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.290108446 +0000 UTC m=+159.548408491 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: W1128 17:01:09.794250 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefc1a80b_89db_4363_a441_b02ed373b2c7.slice/crio-f697d3d7f4fe1bf3c0de9b6e6604ddafc6b53fbeda7ae4c2b7f8e11a70032fb2 WatchSource:0}: Error finding container f697d3d7f4fe1bf3c0de9b6e6604ddafc6b53fbeda7ae4c2b7f8e11a70032fb2: Status 404 returned error can't find the container with id f697d3d7f4fe1bf3c0de9b6e6604ddafc6b53fbeda7ae4c2b7f8e11a70032fb2 Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.802019 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" podStartSLOduration=134.802002091 podStartE2EDuration="2m14.802002091s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:09.799553692 +0000 UTC m=+159.057853737" watchObservedRunningTime="2025-11-28 17:01:09.802002091 +0000 UTC m=+159.060302126" Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.803922 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.817013 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:09 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:09 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:09 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.817046 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:09 crc kubenswrapper[4710]: W1128 17:01:09.820045 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf58f5bc_2ecb_49c9_91e5_dca036a205ef.slice/crio-6e832d134663032ab14c8b395774ac4b1cc7600c2ca706c82fb9aca5871d28a6 WatchSource:0}: Error finding container 6e832d134663032ab14c8b395774ac4b1cc7600c2ca706c82fb9aca5871d28a6: Status 404 returned error can't find the container with id 6e832d134663032ab14c8b395774ac4b1cc7600c2ca706c82fb9aca5871d28a6 Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.833703 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rk9hm"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.852541 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn"] Nov 28 17:01:09 crc 
kubenswrapper[4710]: I1128 17:01:09.853472 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-nfrwd" podStartSLOduration=134.853459983 podStartE2EDuration="2m14.853459983s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:09.844106982 +0000 UTC m=+159.102407027" watchObservedRunningTime="2025-11-28 17:01:09.853459983 +0000 UTC m=+159.111760028" Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.861780 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.863595 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.880137 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-xpfn7"] Nov 28 17:01:09 crc kubenswrapper[4710]: W1128 17:01:09.885308 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dc8675b_7fdd_4887_a46b_19bd9b3fb5bc.slice/crio-87c6fed0f1b1bd72e47eba1ba5c6e80964ea7e09023c9f22410f9743a9023fcb WatchSource:0}: Error finding container 87c6fed0f1b1bd72e47eba1ba5c6e80964ea7e09023c9f22410f9743a9023fcb: Status 404 returned error can't find the container with id 87c6fed0f1b1bd72e47eba1ba5c6e80964ea7e09023c9f22410f9743a9023fcb Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.892538 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.892970 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.39295879 +0000 UTC m=+159.651258835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.939371 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-7z6nl" podStartSLOduration=135.939340999 podStartE2EDuration="2m15.939340999s" podCreationTimestamp="2025-11-28 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:09.889492258 +0000 UTC m=+159.147792303" watchObservedRunningTime="2025-11-28 17:01:09.939340999 +0000 UTC m=+159.197641044" Nov 28 17:01:09 crc kubenswrapper[4710]: W1128 17:01:09.944867 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod524b05ad_4b2c_4aa9_9851_5c0b4ee8556b.slice/crio-c3419ba8461ca02e7b9796611a28232ee0a4ecc3787310e855a90531206b81f7 WatchSource:0}: Error finding container c3419ba8461ca02e7b9796611a28232ee0a4ecc3787310e855a90531206b81f7: Status 404 returned error can't find the container with id c3419ba8461ca02e7b9796611a28232ee0a4ecc3787310e855a90531206b81f7 Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.959589 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.961311 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-p7pp6"] Nov 28 17:01:09 crc kubenswrapper[4710]: I1128 17:01:09.998620 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:09 crc kubenswrapper[4710]: E1128 17:01:09.998986 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.498971616 +0000 UTC m=+159.757271661 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.044501 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp"] Nov 28 17:01:10 crc kubenswrapper[4710]: W1128 17:01:10.082350 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7dde429_e84e_48dd_a0dc_1bb66d082748.slice/crio-30daba799a8e3be724587f522a2234a69a7ac2767072f0ed47ad982a8ce43621 WatchSource:0}: Error finding container 30daba799a8e3be724587f522a2234a69a7ac2767072f0ed47ad982a8ce43621: Status 404 returned error can't find the container with id 30daba799a8e3be724587f522a2234a69a7ac2767072f0ed47ad982a8ce43621 Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.099653 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.100037 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.600023851 +0000 UTC m=+159.858323906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:10 crc kubenswrapper[4710]: W1128 17:01:10.126153 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e29ad41_712f_4502_bdec_aad915a5cefc.slice/crio-2d0369ffa697a26b1dd6b7d0745b333aa6b9ebcf75a9e087b1d44e1ded74936b WatchSource:0}: Error finding container 2d0369ffa697a26b1dd6b7d0745b333aa6b9ebcf75a9e087b1d44e1ded74936b: Status 404 returned error can't find the container with id 2d0369ffa697a26b1dd6b7d0745b333aa6b9ebcf75a9e087b1d44e1ded74936b Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.143725 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-rfr7v" podStartSLOduration=135.143706654 podStartE2EDuration="2m15.143706654s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:10.063842562 +0000 UTC m=+159.322142607" watchObservedRunningTime="2025-11-28 17:01:10.143706654 +0000 UTC m=+159.402006699" Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.151945 4710 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-v7m54 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.22:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.152348 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" podUID="5a82d2d7-4966-4dff-b1bf-5995aedd9fae" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.22:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.151998 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k"] Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.188328 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" podStartSLOduration=136.188311505 podStartE2EDuration="2m16.188311505s" podCreationTimestamp="2025-11-28 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:10.187504389 +0000 UTC m=+159.445804434" watchObservedRunningTime="2025-11-28 17:01:10.188311505 +0000 UTC m=+159.446611550" Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.200453 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.201571 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.701556843 +0000 UTC m=+159.959856888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.302360 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.302669 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.8026572 +0000 UTC m=+160.060957245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.326130 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" event={"ID":"e1fe1016-39da-42d0-9d25-818227699166","Type":"ContainerStarted","Data":"6c67909e38e649c135ea2a0591901c57414ac4660e67bada22ee566d1053fdba"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.327669 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-79pb2" event={"ID":"673012e1-2884-444d-80c8-a2007d1ecb96","Type":"ContainerStarted","Data":"a59c69ed8cd45968ee937645aadd8a7541426877ac31884341e550d8fcf31a19"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.333619 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" event={"ID":"6e3e3a1c-47ab-4aea-9a12-6323314ca17a","Type":"ContainerStarted","Data":"57afe2d80ea79715cbb5134d97929fd5fdce1698cf48255458e6ff8a2afc5edc"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.333673 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" event={"ID":"6e3e3a1c-47ab-4aea-9a12-6323314ca17a","Type":"ContainerStarted","Data":"0eb49b6c226bad3e74f7ace14de6f2f901ac75c1c1e5156d2a6623e5da6c9016"} Nov 28 17:01:10 crc kubenswrapper[4710]: 
I1128 17:01:10.341902 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" event={"ID":"e65664f4-d101-4115-8bf7-751bb2276527","Type":"ContainerStarted","Data":"357ecc836a0b8c1ec56434db71b577389552d479331abd9b5bb7686694aee68d"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.344514 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" event={"ID":"0e6a891c-4066-434e-8d84-ed9038be6f2f","Type":"ContainerStarted","Data":"fd1a557c73cfc248f75e109be773416b360a9e87f25814f5bed36bea677cd82c"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.344550 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" event={"ID":"0e6a891c-4066-434e-8d84-ed9038be6f2f","Type":"ContainerStarted","Data":"d9abd5a346b6193031fd1bd7988134a283e19d3b8e80119dd3c85ceae1759a2d"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.345571 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.348341 4710 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-sgkms container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body= Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.348391 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" podUID="0e6a891c-4066-434e-8d84-ed9038be6f2f" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.349817 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" event={"ID":"119dccaa-966c-49ef-8c37-d5cf86e23cf7","Type":"ContainerStarted","Data":"3c90b9dc488055280860ea1135919173b2c08181d0bdc88243628a8fa67cdffc"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.349852 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" event={"ID":"119dccaa-966c-49ef-8c37-d5cf86e23cf7","Type":"ContainerStarted","Data":"122a9d56b1b58b27aed307f5f4390aeb7f117820298d80241cb401d155962863"} Nov 28 17:01:10 crc kubenswrapper[4710]: W1128 17:01:10.367602 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63bedb67_2a2d_4b3b_b28e_72cf2d8f0e84.slice/crio-f0dbbd738be94cc1468774c752c920e6c83e03ae2d59919804fdee7371a4aa48 WatchSource:0}: Error finding container f0dbbd738be94cc1468774c752c920e6c83e03ae2d59919804fdee7371a4aa48: Status 404 returned error can't find the container with id f0dbbd738be94cc1468774c752c920e6c83e03ae2d59919804fdee7371a4aa48 Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.374035 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" event={"ID":"93f56c4d-2217-41d4-82dc-aef9c5b5096e","Type":"ContainerStarted","Data":"6dc44c8c67d26301267d670a7e49ff4f2cfca7a97f8f519941e39e329b413712"} Nov 28 17:01:10 crc 
kubenswrapper[4710]: I1128 17:01:10.381188 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms" podStartSLOduration=135.381172248 podStartE2EDuration="2m15.381172248s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:10.380647151 +0000 UTC m=+159.638947196" watchObservedRunningTime="2025-11-28 17:01:10.381172248 +0000 UTC m=+159.639472293" Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.402966 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.403987 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:10.903972035 +0000 UTC m=+160.162272080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.409666 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-stcdf" event={"ID":"38c6e200-b005-41fe-902b-1f5fc2f9039d","Type":"ContainerStarted","Data":"85258126c98bdf151492942926f641589c38be57dff4831e37bfe2ce57e37fc7"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.431009 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" event={"ID":"19d00c4f-97cb-47db-abeb-2b29db7e427a","Type":"ContainerStarted","Data":"9e6edec70a2971256d358fe532016994e2f4438952a579f6112d8bc085048829"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.435989 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" event={"ID":"3e29ad41-712f-4502-bdec-aad915a5cefc","Type":"ContainerStarted","Data":"2d0369ffa697a26b1dd6b7d0745b333aa6b9ebcf75a9e087b1d44e1ded74936b"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.438035 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" event={"ID":"6fd0e719-abfd-4656-bacb-f003d9cee909","Type":"ContainerStarted","Data":"ce6b93fca822dd2146086676e414a46d05894c5cea3e4192d6d2c19032ea9a93"} Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.444719 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" event={"ID":"8b390d2f-0343-4f77-a3a3-196d446347cb","Type":"ContainerStarted","Data":"86035f7d3c8605a9cd0c548512b8e3bf7c39b65aef8034f379cbbc721e9d7680"} Nov 28 17:01:10 crc kubenswrapper[4710]: 
I1128 17:01:10.466363 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" event={"ID":"1f2d0a06-b022-4bb6-9e49-b601359f5e4e","Type":"ContainerStarted","Data":"1951f4b48353f87e90f125de39bfa5f9cff4c2c5cdbd836edead1e65bcb6e3d9"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.467444 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" event={"ID":"2c67b6df-5032-47d4-b3d9-c98e925a80b1","Type":"ContainerStarted","Data":"560b22ee05b0905de458366612a7cd22eac64ab42c5e432c5545e7dd8281ef69"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.479878 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" event={"ID":"d50b14ce-cd9a-4737-9f55-dea3c5890d2d","Type":"ContainerStarted","Data":"df9ad5ae33e647be1e2fba972346cf2b8604fc702c7108a5c6d0a4e743684d4d"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.479924 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" event={"ID":"d50b14ce-cd9a-4737-9f55-dea3c5890d2d","Type":"ContainerStarted","Data":"5f120a0eeaeacb2f3116a24640d403a6846debee63685ef965c451efb085615e"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.518135 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.518548 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.018532817 +0000 UTC m=+160.276832862 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.526131 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-282rn" event={"ID":"1688c24e-0457-4929-a3c8-5feb624c8b11","Type":"ContainerStarted","Data":"e0e99dbf4a149169458f97983a733f8f59d7b9eaaca1ad123a07b5ec4247afaa"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.545703 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n82pb" event={"ID":"af58f5bc-2ecb-49c9-91e5-dca036a205ef","Type":"ContainerStarted","Data":"6e832d134663032ab14c8b395774ac4b1cc7600c2ca706c82fb9aca5871d28a6"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.578996 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4" event={"ID":"e7dde429-e84e-48dd-a0dc-1bb66d082748","Type":"ContainerStarted","Data":"30daba799a8e3be724587f522a2234a69a7ac2767072f0ed47ad982a8ce43621"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.582973 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" event={"ID":"efc1a80b-89db-4363-a441-b02ed373b2c7","Type":"ContainerStarted","Data":"f697d3d7f4fe1bf3c0de9b6e6604ddafc6b53fbeda7ae4c2b7f8e11a70032fb2"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.617707 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" event={"ID":"9c920bc9-abe9-48c5-8124-f15727832b2e","Type":"ContainerStarted","Data":"46151858bd429571482abdab7da8861e36883fff6031ee4929027487a96115ed"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.619261 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.619589 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.119571042 +0000 UTC m=+160.377871097 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.669258 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd" event={"ID":"4b76130c-96ae-4153-b99c-b7e938e8b71c","Type":"ContainerStarted","Data":"6c117d85a1e65ae986e8964eccbb8ed38c6421d4f601413f4c7e5947450090c6"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.676036 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" podStartSLOduration=70.676024937 podStartE2EDuration="1m10.676024937s" podCreationTimestamp="2025-11-28 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:10.675466729 +0000 UTC m=+159.933766774" watchObservedRunningTime="2025-11-28 17:01:10.676024937 +0000 UTC m=+159.934324982"
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.703336 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" event={"ID":"55c290da-674e-4137-8fa3-97ea8353bf26","Type":"ContainerStarted","Data":"4501616c2a31f67e77e6d0e056a0a4cde552c0e87539550ac65cf6041b56008c"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.703384 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" event={"ID":"55c290da-674e-4137-8fa3-97ea8353bf26","Type":"ContainerStarted","Data":"cda645a0905f20ee3414445b13d3f40c3538c9e9748bf2eb3eb682bc2bdf9217"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.738133 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.739072 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.239058523 +0000 UTC m=+160.497358568 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.769038 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" event={"ID":"c864ba87-2e40-4494-a652-20c34119e1c3","Type":"ContainerStarted","Data":"48d0b93f81441915dfd61bf8a8267dd3fc08d70500a0305521cfc2fad65e92e6"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.769850 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv"
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.773874 4710 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-l5pfv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.777472 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" podUID="c864ba87-2e40-4494-a652-20c34119e1c3" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.816221 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 17:01:10 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld
Nov 28 17:01:10 crc kubenswrapper[4710]: [+]process-running ok
Nov 28 17:01:10 crc kubenswrapper[4710]: healthz check failed
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.816782 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.818732 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd" event={"ID":"bf59eade-a8ba-4951-ade9-090baf203a1f","Type":"ContainerStarted","Data":"2c5058644c109c8f507bbbfc620f7694333c241a953c8f5ef409aee4508a7034"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.818788 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd" event={"ID":"bf59eade-a8ba-4951-ade9-090baf203a1f","Type":"ContainerStarted","Data":"54cf0352400727cc0738f4f901ed606f0f673c811b63f1f59853fe9b1590f894"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.820066 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv" podStartSLOduration=135.820047051 podStartE2EDuration="2m15.820047051s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:10.817788048 +0000 UTC m=+160.076088093" watchObservedRunningTime="2025-11-28 17:01:10.820047051 +0000 UTC m=+160.078347106"
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.830228 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lmtkf" event={"ID":"4e952e15-9cb9-491e-b6cb-afd314e72291","Type":"ContainerStarted","Data":"cde25b0d814ed235ecff5c102c258899001bd4dd7abdf44befa45c40b1c581ba"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.840784 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.841256 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.341226786 +0000 UTC m=+160.599526831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.841370 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.842389 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.342372842 +0000 UTC m=+160.600672887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.879669 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" event={"ID":"dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68","Type":"ContainerStarted","Data":"247b4bb177d53c9f4adbeb9eaee143375a4e88d73114d333fbe5d056aefdec69"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.887639 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" event={"ID":"803dde31-2ca7-49ad-9db2-7b98dd682b99","Type":"ContainerStarted","Data":"fa1299cc1380e9bc2d3fbc5df4243bb0febc63af38824f3db53328201930decf"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.895515 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lmtkf" podStartSLOduration=5.895492559 podStartE2EDuration="5.895492559s" podCreationTimestamp="2025-11-28 17:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:10.893102692 +0000 UTC m=+160.151402737" watchObservedRunningTime="2025-11-28 17:01:10.895492559 +0000 UTC m=+160.153792604"
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.905820 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" event={"ID":"bca7c24d-4634-4d32-a234-2c33cc0bf842","Type":"ContainerStarted","Data":"60d0d22995471e7aa1a1e5c8536006848c600742d5abce7efde842f6f2919c6a"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.925875 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-bvc8s" podStartSLOduration=135.92585727 podStartE2EDuration="2m15.92585727s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:10.92520425 +0000 UTC m=+160.183504295" watchObservedRunningTime="2025-11-28 17:01:10.92585727 +0000 UTC m=+160.184157315"
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.941525 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" event={"ID":"d933366c-bee9-4d19-8152-b4401d886b35","Type":"ContainerStarted","Data":"7eab48f0a5d37cb5dc3f1ff0539c9cffe0f56f8796c129409b17079ee3ca7391"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.942490 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.942808 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"
Nov 28 17:01:10 crc kubenswrapper[4710]: E1128 17:01:10.943610 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.443588203 +0000 UTC m=+160.701888308 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.943936 4710 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-kr9gw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.944049 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" podUID="d933366c-bee9-4d19-8152-b4401d886b35" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.947800 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" event={"ID":"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b","Type":"ContainerStarted","Data":"c3419ba8461ca02e7b9796611a28232ee0a4ecc3787310e855a90531206b81f7"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.966300 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-ncq9p" podStartSLOduration=136.966285037 podStartE2EDuration="2m16.966285037s" podCreationTimestamp="2025-11-28 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:10.963932751 +0000 UTC m=+160.222232796" watchObservedRunningTime="2025-11-28 17:01:10.966285037 +0000 UTC m=+160.224585082"
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.969013 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z7cgp" event={"ID":"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3","Type":"ContainerStarted","Data":"f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.997121 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" event={"ID":"ca70110e-404a-459f-adde-ca66c6bd8f74","Type":"ContainerStarted","Data":"f6b6ea6d59d741d155a4755f4daedc8eaa5294ae10ad5744073826b7c705284e"}
Nov 28 17:01:10 crc kubenswrapper[4710]: I1128 17:01:10.997165 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" event={"ID":"ca70110e-404a-459f-adde-ca66c6bd8f74","Type":"ContainerStarted","Data":"df57b5d06bbca410132d70a17e904232859b3014e28aa9ad4041f46494aee54d"}
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.009408 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rk9hm" event={"ID":"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc","Type":"ContainerStarted","Data":"87c6fed0f1b1bd72e47eba1ba5c6e80964ea7e09023c9f22410f9743a9023fcb"}
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.040194 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54"
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.045167 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.045700 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.545681003 +0000 UTC m=+160.803981048 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.099562 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" podStartSLOduration=136.099544813 podStartE2EDuration="2m16.099544813s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:11.025520041 +0000 UTC m=+160.283820086" watchObservedRunningTime="2025-11-28 17:01:11.099544813 +0000 UTC m=+160.357844858"
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.145785 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.147117 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.64710168 +0000 UTC m=+160.905401725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.175213 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-z7cgp" podStartSLOduration=136.175196268 podStartE2EDuration="2m16.175196268s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:11.099033527 +0000 UTC m=+160.357333572" watchObservedRunningTime="2025-11-28 17:01:11.175196268 +0000 UTC m=+160.433496313"
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.255944 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.256291 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.756279808 +0000 UTC m=+161.014579843 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.356733 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.357134 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.857102257 +0000 UTC m=+161.115402302 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.457985 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.458402 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:11.95838959 +0000 UTC m=+161.216689635 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.559343 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.559793 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.059775317 +0000 UTC m=+161.318075362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.660451 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.661143 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.161126492 +0000 UTC m=+161.419426537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.766331 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.766636 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.266614861 +0000 UTC m=+161.524914906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.810999 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 17:01:11 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld
Nov 28 17:01:11 crc kubenswrapper[4710]: [+]process-running ok
Nov 28 17:01:11 crc kubenswrapper[4710]: healthz check failed
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.811059 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.868027 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.868647 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.368635028 +0000 UTC m=+161.626935073 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:11 crc kubenswrapper[4710]: I1128 17:01:11.969272 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:11 crc kubenswrapper[4710]: E1128 17:01:11.969670 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.469655343 +0000 UTC m=+161.727955388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.070557 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.070923 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.570908565 +0000 UTC m=+161.829208610 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.084029 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n82pb" event={"ID":"af58f5bc-2ecb-49c9-91e5-dca036a205ef","Type":"ContainerStarted","Data":"7a16a7ca3f8a817ae0911431db5ca58e5e3ddf4bd75d35ac03f69d5cf6c36e97"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.103003 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rk9hm" event={"ID":"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc","Type":"ContainerStarted","Data":"a5570506df3c9ea78679f9b40c3267501dcdf54d48d16a4733acd32796bbb599"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.119021 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-79pb2" event={"ID":"673012e1-2884-444d-80c8-a2007d1ecb96","Type":"ContainerStarted","Data":"3fb824bfbc74ca66c538ab540eb42d5f60c00bed8d514921f6ef167bcddf4f3e"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.138536 4710 generic.go:334] "Generic (PLEG): container finished" podID="6e3e3a1c-47ab-4aea-9a12-6323314ca17a" containerID="57afe2d80ea79715cbb5134d97929fd5fdce1698cf48255458e6ff8a2afc5edc" exitCode=0
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.138603 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" event={"ID":"6e3e3a1c-47ab-4aea-9a12-6323314ca17a","Type":"ContainerDied","Data":"57afe2d80ea79715cbb5134d97929fd5fdce1698cf48255458e6ff8a2afc5edc"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.148593 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-79pb2" podStartSLOduration=7.148580255 podStartE2EDuration="7.148580255s" podCreationTimestamp="2025-11-28 17:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.144344238 +0000 UTC m=+161.402644283" watchObservedRunningTime="2025-11-28 17:01:12.148580255 +0000 UTC m=+161.406880300"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.177279 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.178306 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.678278065 +0000 UTC m=+161.936578120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.179519 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" event={"ID":"3e29ad41-712f-4502-bdec-aad915a5cefc","Type":"ContainerStarted","Data":"d6ac532fd3e1f02f38e68123b449dfb9efd64878174fbff1a5eb15c842ee786e"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.238695 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" event={"ID":"dbf102cd-dbbb-43e2-bbf2-8160d7ae5f68","Type":"ContainerStarted","Data":"2cf1e3c1ac928b357e411a76a44cd8ec2b669f4d3a5d862c8862335543e94e28"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.282808 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.284166 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.784151356 +0000 UTC m=+162.042451401 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.308307 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" event={"ID":"2c67b6df-5032-47d4-b3d9-c98e925a80b1","Type":"ContainerStarted","Data":"fa71ff10a8ded93fa8fecda64b106d1fccb2485dea74c4dcca37ad93b17a5448"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.313724 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4" event={"ID":"e7dde429-e84e-48dd-a0dc-1bb66d082748","Type":"ContainerStarted","Data":"b338b83cd4a07ab95f5dc40369ef0a49bfcf3912161058e11e3fb5457411868c"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.320221 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-jkbxp" podStartSLOduration=137.320196281 podStartE2EDuration="2m17.320196281s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.268080677 +0000 UTC m=+161.526380722" watchObservedRunningTime="2025-11-28 17:01:12.320196281 +0000 UTC m=+161.578496326"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.367039 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-p7pp6" podStartSLOduration=137.367022175 podStartE2EDuration="2m17.367022175s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.365923579 +0000 UTC m=+161.624223614" watchObservedRunningTime="2025-11-28 17:01:12.367022175 +0000 UTC m=+161.625322220"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.369145 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-smsqk" podStartSLOduration=137.369134923 podStartE2EDuration="2m17.369134923s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.312609066 +0000 UTC m=+161.570909111" watchObservedRunningTime="2025-11-28 17:01:12.369134923 +0000 UTC m=+161.627434958"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.370788 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" event={"ID":"e65664f4-d101-4115-8bf7-751bb2276527","Type":"ContainerStarted","Data":"c1897ede733d0700ee4a474877414646a8a5f12175c8f4812630f37ed16aa6c3"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.384294 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.387000 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.886976319 +0000 UTC m=+162.145276364 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.387107 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" event={"ID":"639a9052-76a6-4248-99cd-4638000730de","Type":"ContainerStarted","Data":"3e8befadca12cc0d1192de6c041544934f9f64d6d70e783954d3e91180264f91"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.402955 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" event={"ID":"efc1a80b-89db-4363-a441-b02ed373b2c7","Type":"ContainerStarted","Data":"9db8c9b43953d0582571e2a2eccc3b9508c55ad3c6994514fdb0f0847cc54a26"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.421943 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-d8tl4" podStartSLOduration=137.421926539 podStartE2EDuration="2m17.421926539s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.421144653 +0000 UTC m=+161.679444698" watchObservedRunningTime="2025-11-28 17:01:12.421926539 +0000 UTC m=+161.680226584"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.422497 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd" event={"ID":"bf59eade-a8ba-4951-ade9-090baf203a1f","Type":"ContainerStarted","Data":"8c02b082f61eef5242f295dbf20efe8e49182e6f5a047146fc007af2944834a5"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.459594 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" event={"ID":"19d00c4f-97cb-47db-abeb-2b29db7e427a","Type":"ContainerStarted","Data":"9657283f6c3c13759c94d00696e3a6dcfd6db9d9b04a18084b3855ed728ddaf4"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.460557 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.478354 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-xpfn7" podStartSLOduration=137.478339232 podStartE2EDuration="2m17.478339232s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.476580485 +0000 UTC m=+161.734880530" watchObservedRunningTime="2025-11-28 17:01:12.478339232 +0000 UTC m=+161.736639277"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.486770 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.488518 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:12.98849875 +0000 UTC m=+162.246798885 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.489318 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" event={"ID":"524b05ad-4b2c-4aa9-9851-5c0b4ee8556b","Type":"ContainerStarted","Data":"4286b811e698191eac22fcea43d940843391efa12f0c3728bb7c38a0eadbf1e5"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.489410 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.499684 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" event={"ID":"119dccaa-966c-49ef-8c37-d5cf86e23cf7","Type":"ContainerStarted","Data":"bec9516f53b59be92b4afb1e9d6a81e120a162dc55cb140040cecba12c07f233"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.500790 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.509336 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-7jcgx" podStartSLOduration=137.509319213 podStartE2EDuration="2m17.509319213s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.507066181 +0000 UTC m=+161.765366226" watchObservedRunningTime="2025-11-28 17:01:12.509319213 +0000 UTC m=+161.767619258"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.545320 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" event={"ID":"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84","Type":"ContainerStarted","Data":"0c189582edffb2e836b21b97833063aaf7dbbfe4c951858e6a681454624d2a9f"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.545364 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" event={"ID":"63bedb67-2a2d-4b3b-b28e-72cf2d8f0e84","Type":"ContainerStarted","Data":"f0dbbd738be94cc1468774c752c920e6c83e03ae2d59919804fdee7371a4aa48"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.585749 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" event={"ID":"55c290da-674e-4137-8fa3-97ea8353bf26","Type":"ContainerStarted","Data":"be6b827aa1b6ad956c1ad75347587c447efff8d3dff6046bb0c7451501d399e8"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.590506 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.591348 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.091334283 +0000 UTC m=+162.349634328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.610473 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" event={"ID":"93f56c4d-2217-41d4-82dc-aef9c5b5096e","Type":"ContainerStarted","Data":"2392ae82df261c6e9d1bd549afa68ed1f7267b5ce24a92f827bfd3aed6c64958"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.611405 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.612366 4710 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vbg64 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body=
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.612401 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" podUID="93f56c4d-2217-41d4-82dc-aef9c5b5096e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.628882 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" event={"ID":"ca70110e-404a-459f-adde-ca66c6bd8f74","Type":"ContainerStarted","Data":"92110a4af8c1a43aee89050f25a989e3827155bb9658a863164f167821ed27cd"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.640617 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-8thtd" podStartSLOduration=137.640601706 podStartE2EDuration="2m17.640601706s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.56305318 +0000 UTC m=+161.821353225" watchObservedRunningTime="2025-11-28 17:01:12.640601706 +0000 UTC m=+161.898901751"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.690419 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd" event={"ID":"4b76130c-96ae-4153-b99c-b7e938e8b71c","Type":"ContainerStarted","Data":"bdf3cecf73bf2cf43a1497acd010f79740ffd2641d02619455581782bbd1a2d0"}
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.693356 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-stcdf"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.694388 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.698170 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.198127615 +0000 UTC m=+162.456427660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.712560 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.720493 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-stcdf"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.720941 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-g7sjn" podStartSLOduration=137.720929592 podStartE2EDuration="2m17.720929592s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.64198223 +0000 UTC m=+161.900282275" watchObservedRunningTime="2025-11-28 17:01:12.720929592 +0000 UTC m=+161.979229627"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.721908 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" podStartSLOduration=137.721902113 podStartE2EDuration="2m17.721902113s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.712803839 +0000 UTC m=+161.971103884" watchObservedRunningTime="2025-11-28 17:01:12.721902113 +0000 UTC m=+161.980202158"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.723926 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-sgkms"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.770674 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-l5pfv"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.795575 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.795790 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.29576333 +0000 UTC m=+162.554063385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.796104 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.805097 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.305082611 +0000 UTC m=+162.563382646 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.805409 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 28 17:01:12 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld
Nov 28 17:01:12 crc kubenswrapper[4710]: [+]process-running ok
Nov 28 17:01:12 crc kubenswrapper[4710]: healthz check failed
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.805444 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.809209 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-5hfcn" podStartSLOduration=137.809193835 podStartE2EDuration="2m17.809193835s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.801326 +0000 UTC m=+162.059626065" watchObservedRunningTime="2025-11-28 17:01:12.809193835 +0000 UTC m=+162.067493880"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.843945 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4bldc" podStartSLOduration=137.843928566 podStartE2EDuration="2m17.843928566s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.842074937 +0000 UTC m=+162.100374982" watchObservedRunningTime="2025-11-28 17:01:12.843928566 +0000 UTC m=+162.102228611"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.900519 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 28 17:01:12 crc kubenswrapper[4710]: E1128 17:01:12.901038 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.401018272 +0000 UTC m=+162.659318327 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.926075 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-557n9" podStartSLOduration=137.926059501 podStartE2EDuration="2m17.926059501s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.924427648 +0000 UTC m=+162.182727693" watchObservedRunningTime="2025-11-28 17:01:12.926059501 +0000 UTC m=+162.184359546"
Nov 28 17:01:12 crc kubenswrapper[4710]: I1128 17:01:12.999211 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-k9mc2" podStartSLOduration=137.999194605 podStartE2EDuration="2m17.999194605s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:12.992802738 +0000 UTC m=+162.251102783" watchObservedRunningTime="2025-11-28 17:01:12.999194605 +0000 UTC m=+162.257494650"
Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.003092 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv"
Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.003418 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.50340639 +0000 UTC m=+162.761706435 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.104129 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.104559 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.604524478 +0000 UTC m=+162.862824523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.104725 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.105281 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.605272213 +0000 UTC m=+162.863572258 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.186704 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-282rn" podStartSLOduration=138.186682424 podStartE2EDuration="2m18.186682424s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.16489178 +0000 UTC m=+162.423191825" watchObservedRunningTime="2025-11-28 17:01:13.186682424 +0000 UTC m=+162.444982469" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.206030 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.206357 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.706341599 +0000 UTC m=+162.964641644 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.273200 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" podStartSLOduration=138.273182309 podStartE2EDuration="2m18.273182309s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.272502697 +0000 UTC m=+162.530802752" watchObservedRunningTime="2025-11-28 17:01:13.273182309 +0000 UTC m=+162.531482354" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.274200 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nxlw9" podStartSLOduration=138.274194322 podStartE2EDuration="2m18.274194322s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.22803613 +0000 UTC m=+162.486336175" watchObservedRunningTime="2025-11-28 17:01:13.274194322 +0000 UTC m=+162.532494367" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.307973 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.308385 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.808370116 +0000 UTC m=+163.066670161 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.344392 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.344448 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.384700 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" podStartSLOduration=138.384676902 podStartE2EDuration="2m18.384676902s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.313849633 +0000 UTC m=+162.572149698" watchObservedRunningTime="2025-11-28 17:01:13.384676902 +0000 UTC m=+162.642976947" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.409861 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.410083 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.910065653 +0000 UTC m=+163.168365708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.410191 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.410534 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:13.910525628 +0000 UTC m=+163.168825683 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.413222 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd" podStartSLOduration=138.413207284 podStartE2EDuration="2m18.413207284s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.411953583 +0000 UTC m=+162.670253638" watchObservedRunningTime="2025-11-28 17:01:13.413207284 +0000 UTC m=+162.671507329" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.445565 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-swfl4" podStartSLOduration=138.445548069 podStartE2EDuration="2m18.445548069s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.443258295 +0000 UTC m=+162.701558340" watchObservedRunningTime="2025-11-28 17:01:13.445548069 +0000 UTC m=+162.703848114" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.474483 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-stcdf" podStartSLOduration=138.474467504 podStartE2EDuration="2m18.474467504s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.473591615 +0000 UTC m=+162.731891670" watchObservedRunningTime="2025-11-28 17:01:13.474467504 +0000 UTC m=+162.732767549" Nov 28 17:01:13 crc 
kubenswrapper[4710]: I1128 17:01:13.493682 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rzq8k" podStartSLOduration=138.493661974 podStartE2EDuration="2m18.493661974s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.49073911 +0000 UTC m=+162.749039155" watchObservedRunningTime="2025-11-28 17:01:13.493661974 +0000 UTC m=+162.751962019" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.510874 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.511164 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:14.011148099 +0000 UTC m=+163.269448154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.546571 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kn962"] Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.547496 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.549703 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.564040 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kn962"] Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.612371 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.612952 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:14.112930639 +0000 UTC m=+163.371230684 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.695613 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-2mtxd" event={"ID":"4b76130c-96ae-4153-b99c-b7e938e8b71c","Type":"ContainerStarted","Data":"9fc1ca49c4e9c15ccb5a6cf92d3a93a2ce1d7dafe895cae8bb43bf52d365363b"} Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.698301 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" event={"ID":"6fd0e719-abfd-4656-bacb-f003d9cee909","Type":"ContainerStarted","Data":"646e8d019441fa0f4ec55bfb87dd4e89a9d7d112e90c3d92181714ee43235d6a"} Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.700020 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" event={"ID":"639a9052-76a6-4248-99cd-4638000730de","Type":"ContainerStarted","Data":"75b1128ba034f0063753faa7bc02def42a1a388ecb1636d708da8b159a0108c5"} Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.701861 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-n82pb" event={"ID":"af58f5bc-2ecb-49c9-91e5-dca036a205ef","Type":"ContainerStarted","Data":"5c8908c83964883c2bf5655a21219129562cec3c318391a55baf132b42d3232e"} Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.703819 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rk9hm" event={"ID":"4dc8675b-7fdd-4887-a46b-19bd9b3fb5bc","Type":"ContainerStarted","Data":"5a9827464a2f129a9d9150659f0c5422c3acbc282e2e67107dadf3fff3ff9e02"} Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.703914 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-rk9hm" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.706124 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" event={"ID":"6e3e3a1c-47ab-4aea-9a12-6323314ca17a","Type":"ContainerStarted","Data":"6c42cb5f0bc2f7e70a7f54ab0629886c0348bc017c2dd926686abc6e46379082"} Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.707296 4710 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vbg64 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.707335 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" podUID="93f56c4d-2217-41d4-82dc-aef9c5b5096e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.713325 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.713579 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-catalog-content\") pod \"community-operators-kn962\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.713608 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-utilities\") pod \"community-operators-kn962\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.713673 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99lj\" (UniqueName: \"kubernetes.io/projected/1f92a242-f0d2-495e-a018-1888abeedda2-kube-api-access-v99lj\") pod \"community-operators-kn962\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.713807 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:14.213788699 +0000 UTC m=+163.472088744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.724404 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ghnkd"] Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.725360 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.727292 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.750894 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" podStartSLOduration=139.750872496 podStartE2EDuration="2m19.750872496s" podCreationTimestamp="2025-11-28 16:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.739998505 +0000 UTC m=+162.998298560" watchObservedRunningTime="2025-11-28 17:01:13.750872496 +0000 UTC m=+163.009172541" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.753929 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghnkd"] Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.763784 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" podStartSLOduration=138.763739703 podStartE2EDuration="2m18.763739703s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.759989801 +0000 UTC m=+163.018289846" watchObservedRunningTime="2025-11-28 17:01:13.763739703 +0000 UTC m=+163.022039748" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.795651 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rk9hm" podStartSLOduration=8.795632163 podStartE2EDuration="8.795632163s" podCreationTimestamp="2025-11-28 17:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.791357035 +0000 UTC m=+163.049657070" watchObservedRunningTime="2025-11-28 17:01:13.795632163 +0000 UTC m=+163.053932208" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.800894 4710 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.805301 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:13 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:13 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:13 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.805366 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.816034 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v99lj\" (UniqueName: \"kubernetes.io/projected/1f92a242-f0d2-495e-a018-1888abeedda2-kube-api-access-v99lj\") pod 
\"community-operators-kn962\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.816081 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-catalog-content\") pod \"certified-operators-ghnkd\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.816186 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.816441 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-utilities\") pod \"certified-operators-ghnkd\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.816802 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-catalog-content\") pod \"community-operators-kn962\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.816884 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqksm\" (UniqueName: \"kubernetes.io/projected/60f78884-95af-4b4f-bc63-66d8c883f9dc-kube-api-access-fqksm\") pod \"certified-operators-ghnkd\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.816942 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-utilities\") pod \"community-operators-kn962\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.823401 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:14.323380969 +0000 UTC m=+163.581681014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.830402 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-utilities\") pod \"community-operators-kn962\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.832543 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-catalog-content\") pod \"community-operators-kn962\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.883078 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-n82pb" podStartSLOduration=138.883055178 podStartE2EDuration="2m18.883055178s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:13.862778223 +0000 UTC m=+163.121078268" watchObservedRunningTime="2025-11-28 17:01:13.883055178 +0000 UTC m=+163.141355223" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.890830 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v99lj\" (UniqueName: \"kubernetes.io/projected/1f92a242-f0d2-495e-a018-1888abeedda2-kube-api-access-v99lj\") pod \"community-operators-kn962\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.920558 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.920867 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqksm\" (UniqueName: \"kubernetes.io/projected/60f78884-95af-4b4f-bc63-66d8c883f9dc-kube-api-access-fqksm\") pod \"certified-operators-ghnkd\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.920960 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-catalog-content\") pod \"certified-operators-ghnkd\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.921029 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-utilities\") pod \"certified-operators-ghnkd\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.921536 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-utilities\") pod \"certified-operators-ghnkd\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:13 crc kubenswrapper[4710]: E1128 17:01:13.921618 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:14.421601404 +0000 UTC m=+163.679901439 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.922177 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-catalog-content\") pod \"certified-operators-ghnkd\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.928948 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.936214 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nfs9g"] Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.937191 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.953641 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfs9g"] Nov 28 17:01:13 crc kubenswrapper[4710]: I1128 17:01:13.986837 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqksm\" (UniqueName: \"kubernetes.io/projected/60f78884-95af-4b4f-bc63-66d8c883f9dc-kube-api-access-fqksm\") pod \"certified-operators-ghnkd\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.022502 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.022555 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-utilities\") pod \"community-operators-nfs9g\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.022596 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnlff\" (UniqueName: \"kubernetes.io/projected/b69c848e-e4d1-45f3-8bd2-362ffbc93130-kube-api-access-pnlff\") pod \"community-operators-nfs9g\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.022629 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-catalog-content\") pod \"community-operators-nfs9g\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: E1128 17:01:14.022911 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-28 17:01:14.522900568 +0000 UTC m=+163.781200613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.043082 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.113969 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vrwkm"] Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.114841 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.123397 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:14 crc kubenswrapper[4710]: E1128 17:01:14.123560 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:14.623520709 +0000 UTC m=+163.881820754 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.123598 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnlff\" (UniqueName: \"kubernetes.io/projected/b69c848e-e4d1-45f3-8bd2-362ffbc93130-kube-api-access-pnlff\") pod \"community-operators-nfs9g\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.123636 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-catalog-content\") pod \"community-operators-nfs9g\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.123705 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.123735 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-utilities\") pod \"community-operators-nfs9g\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: E1128 17:01:14.124030 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-28 17:01:14.624018516 +0000 UTC m=+163.882318561 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rtzhv" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.124201 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-catalog-content\") pod \"community-operators-nfs9g\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.124465 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-utilities\") pod \"community-operators-nfs9g\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.139909 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrwkm"] Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.153017 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnlff\" (UniqueName: \"kubernetes.io/projected/b69c848e-e4d1-45f3-8bd2-362ffbc93130-kube-api-access-pnlff\") pod \"community-operators-nfs9g\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.174446 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kn962" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.225232 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.225929 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-utilities\") pod \"certified-operators-vrwkm\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.225993 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qld2f\" (UniqueName: \"kubernetes.io/projected/013bd749-c6a7-42af-9bf4-96a35c5fc718-kube-api-access-qld2f\") pod \"certified-operators-vrwkm\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.226037 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-catalog-content\") pod \"certified-operators-vrwkm\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: E1128 17:01:14.226247 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-28 17:01:14.726227558 +0000 UTC m=+163.984527603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.290061 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.291987 4710 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-28T17:01:13.800928024Z","Handler":null,"Name":""} Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.313524 4710 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.313563 4710 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.327477 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-utilities\") pod \"certified-operators-vrwkm\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.327529 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qld2f\" (UniqueName: \"kubernetes.io/projected/013bd749-c6a7-42af-9bf4-96a35c5fc718-kube-api-access-qld2f\") pod \"certified-operators-vrwkm\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.327555 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.327576 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-catalog-content\") pod \"certified-operators-vrwkm\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.328048 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-catalog-content\") pod \"certified-operators-vrwkm\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.328255 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-utilities\") pod \"certified-operators-vrwkm\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.348956 4710 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
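
The records above, together with the two that follow, trace the whole lifecycle of volume pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8: every MountVolume.MountDevice and UnmountVolume.TearDown attempt fails with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and is requeued with a fixed 500ms durationBeforeRetry; at 17:01:13.800 the plugin watcher picks up the driver's registration socket, at 17:01:14.313 the kubelet validates and registers the driver, and the very next attempt (17:01:14.348, just below) succeeds. Because the driver does not advertise the STAGE_UNSTAGE_VOLUME capability, the MountDevice step itself is a no-op and is skipped. A minimal Go sketch of that fail-fast, fixed-backoff retry pattern follows; all names in it are assumed for illustration, and it is not kubelet source:

    // Sketch: why the log shows "No retries permitted until ..." every 500ms
    // and then a sudden success. Names are assumed; not kubelet source code.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // driverRegistry stands in for the kubelet's table of registered CSI
    // plugins, populated when a registration socket appears under
    // /var/lib/kubelet/plugins_registry/.
    type driverRegistry struct {
        mu      sync.Mutex
        drivers map[string]bool
    }

    func (r *driverRegistry) registered(name string) bool {
        r.mu.Lock()
        defer r.mu.Unlock()
        return r.drivers[name]
    }

    func (r *driverRegistry) register(name string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.drivers[name] = true
    }

    // mountDevice fails fast when the driver is unknown, mirroring
    // "attacher.MountDevice failed to create newCsiDriverClient: driver name
    // ... not found in the list of registered CSI drivers".
    func mountDevice(r *driverRegistry, driver string) error {
        if !r.registered(driver) {
            return fmt.Errorf("driver name %s not found in the list of registered CSI drivers", driver)
        }
        // With STAGE_UNSTAGE_VOLUME absent, staging is a no-op and the call
        // succeeds immediately, as at 17:01:14.348956 above.
        return nil
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]bool{}}

        // Simulate the plugin watcher spotting
        // kubevirt.io.hostpath-provisioner-reg.sock a couple of seconds
        // after the first mount attempt.
        go func() {
            time.Sleep(2 * time.Second)
            reg.register("kubevirt.io.hostpath-provisioner")
        }()

        const backoff = 500 * time.Millisecond // the durationBeforeRetry in the log
        for {
            if err := mountDevice(reg, "kubevirt.io.hostpath-provisioner"); err != nil {
                fmt.Printf("failed: %v; retrying in %v\n", err, backoff)
                time.Sleep(backoff)
                continue
            }
            fmt.Println("MountVolume.MountDevice succeeded")
            return
        }
    }

As the log shows, the failed operation is not retried inline: nestedpendingoperations refuses it until the deadline passes ("No retries permitted until ..."), and the volume reconciler simply re-issues it on a later sync loop, which is why the same MountVolume/UnmountVolume pair reappears roughly every half second until registration completes.
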
Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.348998 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.407748 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qld2f\" (UniqueName: \"kubernetes.io/projected/013bd749-c6a7-42af-9bf4-96a35c5fc718-kube-api-access-qld2f\") pod \"certified-operators-vrwkm\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.441054 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.459691 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rtzhv\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.532252 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.578079 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.682330 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.730960 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ghnkd"] Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.751717 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" event={"ID":"639a9052-76a6-4248-99cd-4638000730de","Type":"ContainerStarted","Data":"b425097f5022ca8e59c6e832e8ae07528ac9f0987a01df278fcf4870eabb57af"} Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.760650 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.819988 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:14 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:14 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:14 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:14 crc kubenswrapper[4710]: I1128 17:01:14.820053 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.008935 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kn962"] Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.179038 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.206453 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rtzhv"] Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.258582 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nfs9g"] Nov 28 17:01:15 crc kubenswrapper[4710]: W1128 17:01:15.278808 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb69c848e_e4d1_45f3_8bd2_362ffbc93130.slice/crio-612502fab17c27687912197ef06486d1a42f708ecefccf4bf9ad1f442a23baa0 WatchSource:0}: Error finding container 612502fab17c27687912197ef06486d1a42f708ecefccf4bf9ad1f442a23baa0: Status 404 returned error can't find the container with id 612502fab17c27687912197ef06486d1a42f708ecefccf4bf9ad1f442a23baa0 Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.308299 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vrwkm"] Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.722084 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-89trk"] Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.723455 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.725736 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.738369 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89trk"] Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.760025 4710 generic.go:334] "Generic (PLEG): container finished" podID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerID="1b932af1acca9edb6112733ed0bea88c43cdc2d2cebdc1bd17f786c77fa46611" exitCode=0 Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.760742 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghnkd" event={"ID":"60f78884-95af-4b4f-bc63-66d8c883f9dc","Type":"ContainerDied","Data":"1b932af1acca9edb6112733ed0bea88c43cdc2d2cebdc1bd17f786c77fa46611"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.760805 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghnkd" event={"ID":"60f78884-95af-4b4f-bc63-66d8c883f9dc","Type":"ContainerStarted","Data":"bab4963b5423689bd83942ef0a87f968da41e27fd7050dce0c0f704a1eabe462"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.762889 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.764873 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" event={"ID":"639a9052-76a6-4248-99cd-4638000730de","Type":"ContainerStarted","Data":"935b4055d6dc7f64fa29d4de4327bcf1dd34f2ca0f576a514f33406e9d7d42d3"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.768256 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" event={"ID":"48374daa-0613-4fe0-94a5-311e48a3979f","Type":"ContainerStarted","Data":"65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.768316 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" event={"ID":"48374daa-0613-4fe0-94a5-311e48a3979f","Type":"ContainerStarted","Data":"3fe8f1b7873f9c01da6ae83529b9d567f4fc2c146b9368607638ba18510fe35d"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.768341 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.771219 4710 generic.go:334] "Generic (PLEG): container finished" podID="1f92a242-f0d2-495e-a018-1888abeedda2" containerID="6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f" exitCode=0 Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.771291 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kn962" event={"ID":"1f92a242-f0d2-495e-a018-1888abeedda2","Type":"ContainerDied","Data":"6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.771316 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kn962" 
event={"ID":"1f92a242-f0d2-495e-a018-1888abeedda2","Type":"ContainerStarted","Data":"ed53bc3511075a132cbc7981444d4e57ca8c9de371f90424b4e54a9415d24acc"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.775753 4710 generic.go:334] "Generic (PLEG): container finished" podID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerID="831dcb572c4de08ac72787a5e7697db73837496ecfb7593d8df6f62f54a09230" exitCode=0 Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.775827 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfs9g" event={"ID":"b69c848e-e4d1-45f3-8bd2-362ffbc93130","Type":"ContainerDied","Data":"831dcb572c4de08ac72787a5e7697db73837496ecfb7593d8df6f62f54a09230"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.775849 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfs9g" event={"ID":"b69c848e-e4d1-45f3-8bd2-362ffbc93130","Type":"ContainerStarted","Data":"612502fab17c27687912197ef06486d1a42f708ecefccf4bf9ad1f442a23baa0"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.785785 4710 generic.go:334] "Generic (PLEG): container finished" podID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerID="70f31202bcdf0b6c9c11589a203ea0a6fef80ede2bf57d4d5b241eae3d311f44" exitCode=0 Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.785885 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrwkm" event={"ID":"013bd749-c6a7-42af-9bf4-96a35c5fc718","Type":"ContainerDied","Data":"70f31202bcdf0b6c9c11589a203ea0a6fef80ede2bf57d4d5b241eae3d311f44"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.785936 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrwkm" event={"ID":"013bd749-c6a7-42af-9bf4-96a35c5fc718","Type":"ContainerStarted","Data":"1bd4dceaf02af15381f6ec5a90c9e71eee4a4ba499423dab9287d99ebb363dbf"} Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.796677 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2n4l4" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.805973 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:15 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:15 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:15 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.806035 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.827600 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hf7ls" podStartSLOduration=10.827581649 podStartE2EDuration="10.827581649s" podCreationTimestamp="2025-11-28 17:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:15.826557627 +0000 UTC m=+165.084857672" watchObservedRunningTime="2025-11-28 17:01:15.827581649 +0000 UTC 
m=+165.085881694" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.869522 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df2zs\" (UniqueName: \"kubernetes.io/projected/b967853a-325f-468f-8198-56df77075edf-kube-api-access-df2zs\") pod \"redhat-marketplace-89trk\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") " pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.869730 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-utilities\") pod \"redhat-marketplace-89trk\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") " pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.869833 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-catalog-content\") pod \"redhat-marketplace-89trk\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") " pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.873739 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" podStartSLOduration=140.873724391 podStartE2EDuration="2m20.873724391s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:15.871608673 +0000 UTC m=+165.129908738" watchObservedRunningTime="2025-11-28 17:01:15.873724391 +0000 UTC m=+165.132024446" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.971868 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-utilities\") pod \"redhat-marketplace-89trk\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") " pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.971977 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-catalog-content\") pod \"redhat-marketplace-89trk\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") " pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.972048 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df2zs\" (UniqueName: \"kubernetes.io/projected/b967853a-325f-468f-8198-56df77075edf-kube-api-access-df2zs\") pod \"redhat-marketplace-89trk\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") " pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.972897 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-utilities\") pod \"redhat-marketplace-89trk\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") " pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.973192 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-catalog-content\") pod \"redhat-marketplace-89trk\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") " pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:15 crc kubenswrapper[4710]: I1128 17:01:15.992029 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df2zs\" (UniqueName: \"kubernetes.io/projected/b967853a-325f-468f-8198-56df77075edf-kube-api-access-df2zs\") pod \"redhat-marketplace-89trk\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") " pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.036931 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.115825 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh6j"] Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.117189 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.126235 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh6j"] Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.175591 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-utilities\") pod \"redhat-marketplace-gmh6j\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") " pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.176066 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nfqx\" (UniqueName: \"kubernetes.io/projected/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-kube-api-access-8nfqx\") pod \"redhat-marketplace-gmh6j\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") " pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.176121 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-catalog-content\") pod \"redhat-marketplace-gmh6j\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") " pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.241846 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-89trk"] Nov 28 17:01:16 crc kubenswrapper[4710]: W1128 17:01:16.250900 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb967853a_325f_468f_8198_56df77075edf.slice/crio-77c9bc8d4bb537fcfa3a6e26654694d6bbd5ebe3c92ea3a8a3bb8414028271cd WatchSource:0}: Error finding container 77c9bc8d4bb537fcfa3a6e26654694d6bbd5ebe3c92ea3a8a3bb8414028271cd: Status 404 returned error can't find the container with id 77c9bc8d4bb537fcfa3a6e26654694d6bbd5ebe3c92ea3a8a3bb8414028271cd Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.278058 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-utilities\") pod \"redhat-marketplace-gmh6j\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") " pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.278137 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nfqx\" (UniqueName: \"kubernetes.io/projected/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-kube-api-access-8nfqx\") pod \"redhat-marketplace-gmh6j\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") " pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.278192 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-catalog-content\") pod \"redhat-marketplace-gmh6j\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") " pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.278899 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-catalog-content\") pod \"redhat-marketplace-gmh6j\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") " pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.279185 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-utilities\") pod \"redhat-marketplace-gmh6j\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") " pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.298369 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nfqx\" (UniqueName: \"kubernetes.io/projected/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-kube-api-access-8nfqx\") pod \"redhat-marketplace-gmh6j\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") " pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.444529 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.715145 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h6f9j"] Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.716088 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.718226 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.726385 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6f9j"] Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.786324 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkg7x\" (UniqueName: \"kubernetes.io/projected/e663d5c3-28d1-41de-bc55-18a61513b493-kube-api-access-nkg7x\") pod \"redhat-operators-h6f9j\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.786390 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-utilities\") pod \"redhat-operators-h6f9j\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.786482 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-catalog-content\") pod \"redhat-operators-h6f9j\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.792601 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89trk" event={"ID":"b967853a-325f-468f-8198-56df77075edf","Type":"ContainerStarted","Data":"77c9bc8d4bb537fcfa3a6e26654694d6bbd5ebe3c92ea3a8a3bb8414028271cd"} Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.813280 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:16 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:16 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:16 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.813338 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.847421 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.848187 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.850276 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.850620 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.856751 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.887855 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkg7x\" (UniqueName: \"kubernetes.io/projected/e663d5c3-28d1-41de-bc55-18a61513b493-kube-api-access-nkg7x\") pod \"redhat-operators-h6f9j\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.889021 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-utilities\") pod \"redhat-operators-h6f9j\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.889468 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-utilities\") pod \"redhat-operators-h6f9j\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.890198 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-catalog-content\") pod \"redhat-operators-h6f9j\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.890646 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-catalog-content\") pod \"redhat-operators-h6f9j\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.903390 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkg7x\" (UniqueName: \"kubernetes.io/projected/e663d5c3-28d1-41de-bc55-18a61513b493-kube-api-access-nkg7x\") pod \"redhat-operators-h6f9j\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.992361 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43660c71-590f-4119-8f77-b71cb349e7ce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"43660c71-590f-4119-8f77-b71cb349e7ce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:16 crc kubenswrapper[4710]: I1128 17:01:16.992458 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43660c71-590f-4119-8f77-b71cb349e7ce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"43660c71-590f-4119-8f77-b71cb349e7ce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.094323 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43660c71-590f-4119-8f77-b71cb349e7ce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"43660c71-590f-4119-8f77-b71cb349e7ce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.094544 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43660c71-590f-4119-8f77-b71cb349e7ce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"43660c71-590f-4119-8f77-b71cb349e7ce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.094724 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43660c71-590f-4119-8f77-b71cb349e7ce-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"43660c71-590f-4119-8f77-b71cb349e7ce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.116719 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.123317 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sqpcl"] Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.124326 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.139386 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43660c71-590f-4119-8f77-b71cb349e7ce-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"43660c71-590f-4119-8f77-b71cb349e7ce\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.167162 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.186193 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sqpcl"] Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.196129 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw86k\" (UniqueName: \"kubernetes.io/projected/648e6216-c033-4b77-8dbf-851bbc69edd6-kube-api-access-kw86k\") pod \"redhat-operators-sqpcl\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.196197 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-catalog-content\") pod \"redhat-operators-sqpcl\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.196253 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-utilities\") pod \"redhat-operators-sqpcl\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.273083 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.278980 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.280333 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.292902 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.292932 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.298544 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-utilities\") pod \"redhat-operators-sqpcl\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.298796 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.303778 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-utilities\") pod \"redhat-operators-sqpcl\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.304040 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-kw86k\" (UniqueName: \"kubernetes.io/projected/648e6216-c033-4b77-8dbf-851bbc69edd6-kube-api-access-kw86k\") pod \"redhat-operators-sqpcl\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.304063 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-catalog-content\") pod \"redhat-operators-sqpcl\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.304389 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-catalog-content\") pod \"redhat-operators-sqpcl\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.328807 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw86k\" (UniqueName: \"kubernetes.io/projected/648e6216-c033-4b77-8dbf-851bbc69edd6-kube-api-access-kw86k\") pod \"redhat-operators-sqpcl\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.367429 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.368211 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.370454 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.380398 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.390721 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.393936 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.394024 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.396702 4710 patch_prober.go:28] interesting pod/console-f9d7485db-z7cgp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.396782 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z7cgp" podUID="2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.488664 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h6f9j"] 
Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.508452 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.508796 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37f8964a-f406-4d76-ad10-83d8521e150c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"37f8964a-f406-4d76-ad10-83d8521e150c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.508930 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37f8964a-f406-4d76-ad10-83d8521e150c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"37f8964a-f406-4d76-ad10-83d8521e150c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.512916 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6cf6922-30b9-4011-a998-255a33c143df-metrics-certs\") pod \"network-metrics-daemon-pwn66\" (UID: \"a6cf6922-30b9-4011-a998-255a33c143df\") " pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.555184 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.610862 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37f8964a-f406-4d76-ad10-83d8521e150c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"37f8964a-f406-4d76-ad10-83d8521e150c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.610926 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37f8964a-f406-4d76-ad10-83d8521e150c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"37f8964a-f406-4d76-ad10-83d8521e150c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.611041 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37f8964a-f406-4d76-ad10-83d8521e150c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"37f8964a-f406-4d76-ad10-83d8521e150c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.629928 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh6j"] Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.649408 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37f8964a-f406-4d76-ad10-83d8521e150c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"37f8964a-f406-4d76-ad10-83d8521e150c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.665321 4710 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-pwn66" Nov 28 17:01:17 crc kubenswrapper[4710]: W1128 17:01:17.673969 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c2c1123_1a92_4fc3_ae1b_f1472aaf2e63.slice/crio-1da82a3ad4c3898df749af2f4cfef87f70b120848157ec6431226e02945099b4 WatchSource:0}: Error finding container 1da82a3ad4c3898df749af2f4cfef87f70b120848157ec6431226e02945099b4: Status 404 returned error can't find the container with id 1da82a3ad4c3898df749af2f4cfef87f70b120848157ec6431226e02945099b4 Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.695365 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.758551 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 28 17:01:17 crc kubenswrapper[4710]: W1128 17:01:17.793404 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod43660c71_590f_4119_8f77_b71cb349e7ce.slice/crio-d719ee3a30a44dd08447d53c61328e9e3b8aa81c0ca3405db6462492814b84b6 WatchSource:0}: Error finding container d719ee3a30a44dd08447d53c61328e9e3b8aa81c0ca3405db6462492814b84b6: Status 404 returned error can't find the container with id d719ee3a30a44dd08447d53c61328e9e3b8aa81c0ca3405db6462492814b84b6 Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.799578 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.819037 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:17 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:17 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:17 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.819095 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.873864 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh6j" event={"ID":"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63","Type":"ContainerStarted","Data":"1da82a3ad4c3898df749af2f4cfef87f70b120848157ec6431226e02945099b4"} Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.888015 4710 generic.go:334] "Generic (PLEG): container finished" podID="b967853a-325f-468f-8198-56df77075edf" containerID="c8f03aa3ef4a910c622ea58c8ebefb4c3e1dfb4a61efd9a5d82cd36d74aa7ca5" exitCode=0 Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.888123 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89trk" event={"ID":"b967853a-325f-468f-8198-56df77075edf","Type":"ContainerDied","Data":"c8f03aa3ef4a910c622ea58c8ebefb4c3e1dfb4a61efd9a5d82cd36d74aa7ca5"} Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.909566 4710 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6f9j" event={"ID":"e663d5c3-28d1-41de-bc55-18a61513b493","Type":"ContainerStarted","Data":"872e43683258385295355c04848f87688d15fb8e79e13a5fdab0e30e784de477"} Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.917395 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-z5klw" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.930322 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kq6jz" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.961862 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-282rn" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.969007 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sqpcl"] Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.989941 4710 patch_prober.go:28] interesting pod/downloads-7954f5f757-282rn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.989950 4710 patch_prober.go:28] interesting pod/downloads-7954f5f757-282rn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.989994 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-282rn" podUID="1688c24e-0457-4929-a3c8-5feb624c8b11" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 28 17:01:17 crc kubenswrapper[4710]: I1128 17:01:17.990016 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-282rn" podUID="1688c24e-0457-4929-a3c8-5feb624c8b11" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.006935 4710 patch_prober.go:28] interesting pod/downloads-7954f5f757-282rn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" start-of-body= Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.007002 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-282rn" podUID="1688c24e-0457-4929-a3c8-5feb624c8b11" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.11:8080/\": dial tcp 10.217.0.11:8080: connect: connection refused" Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.125207 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-pwn66"] Nov 28 17:01:18 crc kubenswrapper[4710]: W1128 17:01:18.204721 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6cf6922_30b9_4011_a998_255a33c143df.slice/crio-3a45a78c5231a909dca42b3c159d0d14226e13c6af3dbcd6402a53f534163547 
WatchSource:0}: Error finding container 3a45a78c5231a909dca42b3c159d0d14226e13c6af3dbcd6402a53f534163547: Status 404 returned error can't find the container with id 3a45a78c5231a909dca42b3c159d0d14226e13c6af3dbcd6402a53f534163547 Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.445868 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 28 17:01:18 crc kubenswrapper[4710]: W1128 17:01:18.501826 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod37f8964a_f406_4d76_ad10_83d8521e150c.slice/crio-cc77eb4cbd0e2ee9adc8a081e5a4344b6c0e64de22ecc26c4343d156666d07fc WatchSource:0}: Error finding container cc77eb4cbd0e2ee9adc8a081e5a4344b6c0e64de22ecc26c4343d156666d07fc: Status 404 returned error can't find the container with id cc77eb4cbd0e2ee9adc8a081e5a4344b6c0e64de22ecc26c4343d156666d07fc Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.804596 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:18 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:18 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:18 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.805047 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.960330 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"37f8964a-f406-4d76-ad10-83d8521e150c","Type":"ContainerStarted","Data":"cc77eb4cbd0e2ee9adc8a081e5a4344b6c0e64de22ecc26c4343d156666d07fc"} Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.975870 4710 generic.go:334] "Generic (PLEG): container finished" podID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerID="061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677" exitCode=0 Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.976671 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh6j" event={"ID":"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63","Type":"ContainerDied","Data":"061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677"} Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.980926 4710 generic.go:334] "Generic (PLEG): container finished" podID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerID="b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139" exitCode=0 Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.980987 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqpcl" event={"ID":"648e6216-c033-4b77-8dbf-851bbc69edd6","Type":"ContainerDied","Data":"b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139"} Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.981023 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqpcl" event={"ID":"648e6216-c033-4b77-8dbf-851bbc69edd6","Type":"ContainerStarted","Data":"172d073109a074e907ed4ffcb123145a33a6e600b1d47661ec3452c71e59d08b"} Nov 28 17:01:18 crc 
kubenswrapper[4710]: I1128 17:01:18.990922 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pwn66" event={"ID":"a6cf6922-30b9-4011-a998-255a33c143df","Type":"ContainerStarted","Data":"e4d5f0061205322e47168b6d68ecca801a41d7bc0527d9c3eef4b7e97a76cec1"} Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.990966 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pwn66" event={"ID":"a6cf6922-30b9-4011-a998-255a33c143df","Type":"ContainerStarted","Data":"3a45a78c5231a909dca42b3c159d0d14226e13c6af3dbcd6402a53f534163547"} Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.996106 4710 generic.go:334] "Generic (PLEG): container finished" podID="e663d5c3-28d1-41de-bc55-18a61513b493" containerID="22496dc2a13555c0cf665df9540e38f6ec94713bc4c34cb62fe9be73b05beb9b" exitCode=0 Nov 28 17:01:18 crc kubenswrapper[4710]: I1128 17:01:18.996171 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6f9j" event={"ID":"e663d5c3-28d1-41de-bc55-18a61513b493","Type":"ContainerDied","Data":"22496dc2a13555c0cf665df9540e38f6ec94713bc4c34cb62fe9be73b05beb9b"} Nov 28 17:01:19 crc kubenswrapper[4710]: I1128 17:01:19.003503 4710 generic.go:334] "Generic (PLEG): container finished" podID="9c920bc9-abe9-48c5-8124-f15727832b2e" containerID="46151858bd429571482abdab7da8861e36883fff6031ee4929027487a96115ed" exitCode=0 Nov 28 17:01:19 crc kubenswrapper[4710]: I1128 17:01:19.003624 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" event={"ID":"9c920bc9-abe9-48c5-8124-f15727832b2e","Type":"ContainerDied","Data":"46151858bd429571482abdab7da8861e36883fff6031ee4929027487a96115ed"} Nov 28 17:01:19 crc kubenswrapper[4710]: I1128 17:01:19.011619 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"43660c71-590f-4119-8f77-b71cb349e7ce","Type":"ContainerStarted","Data":"fef943687e45aa1fec598774f9daf9c36655ef226db4c251001583c86b47380d"} Nov 28 17:01:19 crc kubenswrapper[4710]: I1128 17:01:19.012368 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"43660c71-590f-4119-8f77-b71cb349e7ce","Type":"ContainerStarted","Data":"d719ee3a30a44dd08447d53c61328e9e3b8aa81c0ca3405db6462492814b84b6"} Nov 28 17:01:19 crc kubenswrapper[4710]: I1128 17:01:19.058219 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.058204144 podStartE2EDuration="3.058204144s" podCreationTimestamp="2025-11-28 17:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:19.054309148 +0000 UTC m=+168.312609193" watchObservedRunningTime="2025-11-28 17:01:19.058204144 +0000 UTC m=+168.316504179" Nov 28 17:01:19 crc kubenswrapper[4710]: I1128 17:01:19.804454 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:19 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:19 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:19 crc kubenswrapper[4710]: healthz check 
failed Nov 28 17:01:19 crc kubenswrapper[4710]: I1128 17:01:19.804851 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.055182 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-pwn66" event={"ID":"a6cf6922-30b9-4011-a998-255a33c143df","Type":"ContainerStarted","Data":"cc5f3be0fed142261d0ad336684e42a72c2e3a61f8b7036ee2e9790c6e10aaaa"} Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.063843 4710 generic.go:334] "Generic (PLEG): container finished" podID="43660c71-590f-4119-8f77-b71cb349e7ce" containerID="fef943687e45aa1fec598774f9daf9c36655ef226db4c251001583c86b47380d" exitCode=0 Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.063925 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"43660c71-590f-4119-8f77-b71cb349e7ce","Type":"ContainerDied","Data":"fef943687e45aa1fec598774f9daf9c36655ef226db4c251001583c86b47380d"} Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.080779 4710 generic.go:334] "Generic (PLEG): container finished" podID="37f8964a-f406-4d76-ad10-83d8521e150c" containerID="dac29351f15fdb6bd9c7ef8c0b0614e26ed15d124ef37205c657e83f086b1ea8" exitCode=0 Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.080866 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"37f8964a-f406-4d76-ad10-83d8521e150c","Type":"ContainerDied","Data":"dac29351f15fdb6bd9c7ef8c0b0614e26ed15d124ef37205c657e83f086b1ea8"} Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.103674 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-pwn66" podStartSLOduration=145.1036496 podStartE2EDuration="2m25.1036496s" podCreationTimestamp="2025-11-28 16:58:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:01:20.078222868 +0000 UTC m=+169.336522913" watchObservedRunningTime="2025-11-28 17:01:20.1036496 +0000 UTC m=+169.361949635" Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.621505 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.688149 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c920bc9-abe9-48c5-8124-f15727832b2e-config-volume\") pod \"9c920bc9-abe9-48c5-8124-f15727832b2e\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.688237 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5zv\" (UniqueName: \"kubernetes.io/projected/9c920bc9-abe9-48c5-8124-f15727832b2e-kube-api-access-jz5zv\") pod \"9c920bc9-abe9-48c5-8124-f15727832b2e\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.688301 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c920bc9-abe9-48c5-8124-f15727832b2e-secret-volume\") pod \"9c920bc9-abe9-48c5-8124-f15727832b2e\" (UID: \"9c920bc9-abe9-48c5-8124-f15727832b2e\") " Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.689986 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c920bc9-abe9-48c5-8124-f15727832b2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "9c920bc9-abe9-48c5-8124-f15727832b2e" (UID: "9c920bc9-abe9-48c5-8124-f15727832b2e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.696616 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c920bc9-abe9-48c5-8124-f15727832b2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9c920bc9-abe9-48c5-8124-f15727832b2e" (UID: "9c920bc9-abe9-48c5-8124-f15727832b2e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.699338 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c920bc9-abe9-48c5-8124-f15727832b2e-kube-api-access-jz5zv" (OuterVolumeSpecName: "kube-api-access-jz5zv") pod "9c920bc9-abe9-48c5-8124-f15727832b2e" (UID: "9c920bc9-abe9-48c5-8124-f15727832b2e"). InnerVolumeSpecName "kube-api-access-jz5zv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.790220 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5zv\" (UniqueName: \"kubernetes.io/projected/9c920bc9-abe9-48c5-8124-f15727832b2e-kube-api-access-jz5zv\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.790253 4710 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9c920bc9-abe9-48c5-8124-f15727832b2e-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.790263 4710 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c920bc9-abe9-48c5-8124-f15727832b2e-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.802398 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:20 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:20 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:20 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:20 crc kubenswrapper[4710]: I1128 17:01:20.802450 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.088890 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" event={"ID":"9c920bc9-abe9-48c5-8124-f15727832b2e","Type":"ContainerDied","Data":"db4de325a8a9dc14f4775c7498b5eeafcda02aa11151d02e56967b8bebf1e021"} Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.088943 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4de325a8a9dc14f4775c7498b5eeafcda02aa11151d02e56967b8bebf1e021" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.089017 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.345313 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.398640 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37f8964a-f406-4d76-ad10-83d8521e150c-kube-api-access\") pod \"37f8964a-f406-4d76-ad10-83d8521e150c\" (UID: \"37f8964a-f406-4d76-ad10-83d8521e150c\") " Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.398781 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37f8964a-f406-4d76-ad10-83d8521e150c-kubelet-dir\") pod \"37f8964a-f406-4d76-ad10-83d8521e150c\" (UID: \"37f8964a-f406-4d76-ad10-83d8521e150c\") " Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.399034 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/37f8964a-f406-4d76-ad10-83d8521e150c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "37f8964a-f406-4d76-ad10-83d8521e150c" (UID: "37f8964a-f406-4d76-ad10-83d8521e150c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.408931 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37f8964a-f406-4d76-ad10-83d8521e150c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "37f8964a-f406-4d76-ad10-83d8521e150c" (UID: "37f8964a-f406-4d76-ad10-83d8521e150c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.462865 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.501347 4710 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/37f8964a-f406-4d76-ad10-83d8521e150c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.501382 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/37f8964a-f406-4d76-ad10-83d8521e150c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.603023 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43660c71-590f-4119-8f77-b71cb349e7ce-kube-api-access\") pod \"43660c71-590f-4119-8f77-b71cb349e7ce\" (UID: \"43660c71-590f-4119-8f77-b71cb349e7ce\") " Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.603084 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43660c71-590f-4119-8f77-b71cb349e7ce-kubelet-dir\") pod \"43660c71-590f-4119-8f77-b71cb349e7ce\" (UID: \"43660c71-590f-4119-8f77-b71cb349e7ce\") " Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.603406 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43660c71-590f-4119-8f77-b71cb349e7ce-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "43660c71-590f-4119-8f77-b71cb349e7ce" (UID: "43660c71-590f-4119-8f77-b71cb349e7ce"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.606784 4710 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43660c71-590f-4119-8f77-b71cb349e7ce-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.610029 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43660c71-590f-4119-8f77-b71cb349e7ce-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "43660c71-590f-4119-8f77-b71cb349e7ce" (UID: "43660c71-590f-4119-8f77-b71cb349e7ce"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.708171 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43660c71-590f-4119-8f77-b71cb349e7ce-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.802705 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:21 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:21 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:21 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:21 crc kubenswrapper[4710]: I1128 17:01:21.802804 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:22 crc kubenswrapper[4710]: I1128 17:01:22.564351 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"43660c71-590f-4119-8f77-b71cb349e7ce","Type":"ContainerDied","Data":"d719ee3a30a44dd08447d53c61328e9e3b8aa81c0ca3405db6462492814b84b6"} Nov 28 17:01:22 crc kubenswrapper[4710]: I1128 17:01:22.564408 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d719ee3a30a44dd08447d53c61328e9e3b8aa81c0ca3405db6462492814b84b6" Nov 28 17:01:22 crc kubenswrapper[4710]: I1128 17:01:22.564373 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 28 17:01:22 crc kubenswrapper[4710]: I1128 17:01:22.583894 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"37f8964a-f406-4d76-ad10-83d8521e150c","Type":"ContainerDied","Data":"cc77eb4cbd0e2ee9adc8a081e5a4344b6c0e64de22ecc26c4343d156666d07fc"} Nov 28 17:01:22 crc kubenswrapper[4710]: I1128 17:01:22.583966 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc77eb4cbd0e2ee9adc8a081e5a4344b6c0e64de22ecc26c4343d156666d07fc" Nov 28 17:01:22 crc kubenswrapper[4710]: I1128 17:01:22.584021 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 28 17:01:22 crc kubenswrapper[4710]: I1128 17:01:22.804007 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:22 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:22 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:22 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:22 crc kubenswrapper[4710]: I1128 17:01:22.804072 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:23 crc kubenswrapper[4710]: I1128 17:01:23.971716 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:23 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:23 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:23 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:23 crc kubenswrapper[4710]: I1128 17:01:23.972135 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:24 crc kubenswrapper[4710]: I1128 17:01:24.011845 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rk9hm" Nov 28 17:01:24 crc kubenswrapper[4710]: I1128 17:01:24.802167 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:24 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:24 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:24 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:24 crc kubenswrapper[4710]: I1128 17:01:24.802383 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:25 crc kubenswrapper[4710]: I1128 17:01:25.801302 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:25 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:25 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:25 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:25 crc kubenswrapper[4710]: I1128 17:01:25.801555 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Nov 28 17:01:26 crc kubenswrapper[4710]: I1128 17:01:26.803142 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:26 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:26 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:26 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:26 crc kubenswrapper[4710]: I1128 17:01:26.803222 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:27 crc kubenswrapper[4710]: I1128 17:01:27.395257 4710 patch_prober.go:28] interesting pod/console-f9d7485db-z7cgp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Nov 28 17:01:27 crc kubenswrapper[4710]: I1128 17:01:27.395609 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-z7cgp" podUID="2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Nov 28 17:01:27 crc kubenswrapper[4710]: I1128 17:01:27.802249 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:27 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:27 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:27 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:27 crc kubenswrapper[4710]: I1128 17:01:27.802314 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:27 crc kubenswrapper[4710]: I1128 17:01:27.966000 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-282rn" Nov 28 17:01:28 crc kubenswrapper[4710]: I1128 17:01:28.802362 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:28 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:28 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:28 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:28 crc kubenswrapper[4710]: I1128 17:01:28.802420 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:29 crc kubenswrapper[4710]: I1128 17:01:29.803054 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:29 crc kubenswrapper[4710]: [-]has-synced failed: reason withheld Nov 28 17:01:29 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:29 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:29 crc kubenswrapper[4710]: I1128 17:01:29.803105 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:30 crc kubenswrapper[4710]: I1128 17:01:30.802452 4710 patch_prober.go:28] interesting pod/router-default-5444994796-rfr7v container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 28 17:01:30 crc kubenswrapper[4710]: [+]has-synced ok Nov 28 17:01:30 crc kubenswrapper[4710]: [+]process-running ok Nov 28 17:01:30 crc kubenswrapper[4710]: healthz check failed Nov 28 17:01:30 crc kubenswrapper[4710]: I1128 17:01:30.802555 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-rfr7v" podUID="4c21068e-0ce0-4a6e-b41d-985df443a6a7" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 28 17:01:31 crc kubenswrapper[4710]: I1128 17:01:31.802836 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:31 crc kubenswrapper[4710]: I1128 17:01:31.806138 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-rfr7v" Nov 28 17:01:34 crc kubenswrapper[4710]: I1128 17:01:34.586626 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:01:37 crc kubenswrapper[4710]: I1128 17:01:37.399276 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:37 crc kubenswrapper[4710]: I1128 17:01:37.405743 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:01:43 crc kubenswrapper[4710]: I1128 17:01:43.344058 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:01:43 crc kubenswrapper[4710]: I1128 17:01:43.344461 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:01:48 crc kubenswrapper[4710]: I1128 17:01:48.140793 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-cflgb" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.423227 4710 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 17:01:57 crc kubenswrapper[4710]: E1128 17:01:57.424262 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37f8964a-f406-4d76-ad10-83d8521e150c" containerName="pruner" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.424301 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="37f8964a-f406-4d76-ad10-83d8521e150c" containerName="pruner" Nov 28 17:01:57 crc kubenswrapper[4710]: E1128 17:01:57.424350 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43660c71-590f-4119-8f77-b71cb349e7ce" containerName="pruner" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.424364 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="43660c71-590f-4119-8f77-b71cb349e7ce" containerName="pruner" Nov 28 17:01:57 crc kubenswrapper[4710]: E1128 17:01:57.424380 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c920bc9-abe9-48c5-8124-f15727832b2e" containerName="collect-profiles" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.424394 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c920bc9-abe9-48c5-8124-f15727832b2e" containerName="collect-profiles" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.424611 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c920bc9-abe9-48c5-8124-f15727832b2e" containerName="collect-profiles" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.424634 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="37f8964a-f406-4d76-ad10-83d8521e150c" containerName="pruner" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.424650 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="43660c71-590f-4119-8f77-b71cb349e7ce" containerName="pruner" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.425298 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.429291 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.429841 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.437599 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.538732 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.539133 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.640430 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.640535 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.640696 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.675350 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:01:57 crc kubenswrapper[4710]: I1128 17:01:57.755516 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:01:58 crc kubenswrapper[4710]: E1128 17:01:58.660459 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 28 17:01:58 crc kubenswrapper[4710]: E1128 17:01:58.660863 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kw86k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-sqpcl_openshift-marketplace(648e6216-c033-4b77-8dbf-851bbc69edd6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 17:01:58 crc kubenswrapper[4710]: E1128 17:01:58.662077 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-sqpcl" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" Nov 28 17:02:01 crc kubenswrapper[4710]: I1128 17:02:01.822086 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 17:02:01 crc kubenswrapper[4710]: I1128 17:02:01.823141 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:01 crc kubenswrapper[4710]: I1128 17:02:01.845182 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.010457 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-var-lock\") pod \"installer-9-crc\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.010802 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.011120 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.112681 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-var-lock\") pod \"installer-9-crc\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.113026 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kube-api-access\") pod \"installer-9-crc\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.113133 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.113251 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.112833 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-var-lock\") pod \"installer-9-crc\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.139570 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:02 crc kubenswrapper[4710]: I1128 17:02:02.146606 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:03 crc kubenswrapper[4710]: E1128 17:02:03.488690 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-sqpcl" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" Nov 28 17:02:03 crc kubenswrapper[4710]: E1128 17:02:03.570118 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 17:02:03 crc kubenswrapper[4710]: E1128 17:02:03.570296 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqksm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ghnkd_openshift-marketplace(60f78884-95af-4b4f-bc63-66d8c883f9dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 17:02:03 crc kubenswrapper[4710]: E1128 17:02:03.571840 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ghnkd" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" Nov 28 17:02:03 crc kubenswrapper[4710]: E1128 17:02:03.572218 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 28 17:02:03 crc kubenswrapper[4710]: E1128 17:02:03.572363 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nkg7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-h6f9j_openshift-marketplace(e663d5c3-28d1-41de-bc55-18a61513b493): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 17:02:03 crc kubenswrapper[4710]: E1128 17:02:03.573541 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-h6f9j" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" Nov 28 17:02:04 crc kubenswrapper[4710]: E1128 17:02:04.991832 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ghnkd" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" Nov 28 17:02:04 crc kubenswrapper[4710]: E1128 17:02:04.991843 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-h6f9j" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" Nov 28 17:02:05 crc kubenswrapper[4710]: E1128 17:02:05.059771 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 17:02:05 crc kubenswrapper[4710]: E1128 17:02:05.059912 4710 
kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnlff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nfs9g_openshift-marketplace(b69c848e-e4d1-45f3-8bd2-362ffbc93130): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 17:02:05 crc kubenswrapper[4710]: E1128 17:02:05.061103 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nfs9g" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" Nov 28 17:02:06 crc kubenswrapper[4710]: E1128 17:02:06.129971 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nfs9g" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" Nov 28 17:02:06 crc kubenswrapper[4710]: I1128 17:02:06.394720 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 28 17:02:06 crc kubenswrapper[4710]: E1128 17:02:06.518605 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 17:02:06 crc kubenswrapper[4710]: E1128 17:02:06.521105 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nfqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-gmh6j_openshift-marketplace(9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 17:02:06 crc kubenswrapper[4710]: E1128 17:02:06.524534 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-gmh6j" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" Nov 28 17:02:06 crc kubenswrapper[4710]: I1128 17:02:06.541372 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 28 17:02:06 crc kubenswrapper[4710]: W1128 17:02:06.550893 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2ca1472a_cb3f_49dd_bc30_ab277096f0e0.slice/crio-2bd08c004c2d994be1ea9e35e5f7cd7fbe844fac6047e6ada6b0f882bc2e3cb4 WatchSource:0}: Error finding container 2bd08c004c2d994be1ea9e35e5f7cd7fbe844fac6047e6ada6b0f882bc2e3cb4: Status 404 returned error can't find the container with id 2bd08c004c2d994be1ea9e35e5f7cd7fbe844fac6047e6ada6b0f882bc2e3cb4 Nov 28 17:02:06 crc kubenswrapper[4710]: E1128 17:02:06.965833 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 28 17:02:06 crc kubenswrapper[4710]: E1128 17:02:06.966366 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v99lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-kn962_openshift-marketplace(1f92a242-f0d2-495e-a018-1888abeedda2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 17:02:06 crc kubenswrapper[4710]: E1128 17:02:06.967912 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kn962" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.094675 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.094971 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qld2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vrwkm_openshift-marketplace(013bd749-c6a7-42af-9bf4-96a35c5fc718): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.097563 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vrwkm" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.171094 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.171323 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-df2zs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-89trk_openshift-marketplace(b967853a-325f-468f-8198-56df77075edf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.172569 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-89trk" podUID="b967853a-325f-468f-8198-56df77075edf" Nov 28 17:02:07 crc kubenswrapper[4710]: I1128 17:02:07.287607 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ca1472a-cb3f-49dd-bc30-ab277096f0e0","Type":"ContainerStarted","Data":"174f806b3b483309150188b14910e517268bd2cf06bc53cf4033b824d45a0543"} Nov 28 17:02:07 crc kubenswrapper[4710]: I1128 17:02:07.287673 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ca1472a-cb3f-49dd-bc30-ab277096f0e0","Type":"ContainerStarted","Data":"2bd08c004c2d994be1ea9e35e5f7cd7fbe844fac6047e6ada6b0f882bc2e3cb4"} Nov 28 17:02:07 crc kubenswrapper[4710]: I1128 17:02:07.290471 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cea7b5e9-e016-4724-85a2-a4bf2b623f1c","Type":"ContainerStarted","Data":"a2ae73fbe55e5ea21ac51b0ce502f115c0a3de9dd16013062ad3ad778f470715"} Nov 28 17:02:07 crc kubenswrapper[4710]: I1128 17:02:07.290521 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cea7b5e9-e016-4724-85a2-a4bf2b623f1c","Type":"ContainerStarted","Data":"cd0bc6d6a202d97d13e6083dbf75a97b37a235ed3c9e474b8c36f307a935e049"} Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.292512 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-kn962" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.292531 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-gmh6j" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.294117 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vrwkm" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" Nov 28 17:02:07 crc kubenswrapper[4710]: E1128 17:02:07.296464 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-89trk" podUID="b967853a-325f-468f-8198-56df77075edf" Nov 28 17:02:07 crc kubenswrapper[4710]: I1128 17:02:07.302207 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.302188413 podStartE2EDuration="6.302188413s" podCreationTimestamp="2025-11-28 17:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:02:07.301735508 +0000 UTC m=+216.560035573" watchObservedRunningTime="2025-11-28 17:02:07.302188413 +0000 UTC m=+216.560488458" Nov 28 17:02:07 crc kubenswrapper[4710]: I1128 17:02:07.361325 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=10.361303301 podStartE2EDuration="10.361303301s" podCreationTimestamp="2025-11-28 17:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:02:07.36097962 +0000 UTC m=+216.619279665" watchObservedRunningTime="2025-11-28 17:02:07.361303301 +0000 UTC m=+216.619609706" Nov 28 17:02:08 crc kubenswrapper[4710]: I1128 17:02:08.298546 4710 generic.go:334] "Generic (PLEG): container finished" podID="cea7b5e9-e016-4724-85a2-a4bf2b623f1c" containerID="a2ae73fbe55e5ea21ac51b0ce502f115c0a3de9dd16013062ad3ad778f470715" exitCode=0 Nov 28 17:02:08 crc kubenswrapper[4710]: I1128 17:02:08.298841 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cea7b5e9-e016-4724-85a2-a4bf2b623f1c","Type":"ContainerDied","Data":"a2ae73fbe55e5ea21ac51b0ce502f115c0a3de9dd16013062ad3ad778f470715"} Nov 28 17:02:09 crc kubenswrapper[4710]: I1128 17:02:09.551099 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:02:09 crc kubenswrapper[4710]: I1128 17:02:09.646303 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kubelet-dir\") pod \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\" (UID: \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\") " Nov 28 17:02:09 crc kubenswrapper[4710]: I1128 17:02:09.646444 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cea7b5e9-e016-4724-85a2-a4bf2b623f1c" (UID: "cea7b5e9-e016-4724-85a2-a4bf2b623f1c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:09 crc kubenswrapper[4710]: I1128 17:02:09.646552 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kube-api-access\") pod \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\" (UID: \"cea7b5e9-e016-4724-85a2-a4bf2b623f1c\") " Nov 28 17:02:09 crc kubenswrapper[4710]: I1128 17:02:09.647118 4710 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:09 crc kubenswrapper[4710]: I1128 17:02:09.651727 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cea7b5e9-e016-4724-85a2-a4bf2b623f1c" (UID: "cea7b5e9-e016-4724-85a2-a4bf2b623f1c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:09 crc kubenswrapper[4710]: I1128 17:02:09.747629 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cea7b5e9-e016-4724-85a2-a4bf2b623f1c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:10 crc kubenswrapper[4710]: I1128 17:02:10.312054 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"cea7b5e9-e016-4724-85a2-a4bf2b623f1c","Type":"ContainerDied","Data":"cd0bc6d6a202d97d13e6083dbf75a97b37a235ed3c9e474b8c36f307a935e049"} Nov 28 17:02:10 crc kubenswrapper[4710]: I1128 17:02:10.312383 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd0bc6d6a202d97d13e6083dbf75a97b37a235ed3c9e474b8c36f307a935e049" Nov 28 17:02:10 crc kubenswrapper[4710]: I1128 17:02:10.312166 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 28 17:02:13 crc kubenswrapper[4710]: I1128 17:02:13.344571 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:02:13 crc kubenswrapper[4710]: I1128 17:02:13.345216 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:02:13 crc kubenswrapper[4710]: I1128 17:02:13.345266 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:02:13 crc kubenswrapper[4710]: I1128 17:02:13.345871 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:02:13 crc kubenswrapper[4710]: I1128 17:02:13.346006 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e" gracePeriod=600 Nov 28 17:02:14 crc kubenswrapper[4710]: I1128 17:02:14.332020 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e" exitCode=0 Nov 28 17:02:14 crc kubenswrapper[4710]: I1128 17:02:14.332059 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e"} Nov 28 17:02:15 crc kubenswrapper[4710]: I1128 17:02:15.339207 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"456a00d5cd0fbfc13a479799f023f2982c20805bb4d32bd660ed7b512390b959"} Nov 28 17:02:18 crc kubenswrapper[4710]: I1128 17:02:18.356266 4710 generic.go:334] "Generic (PLEG): container finished" podID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerID="8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4" exitCode=0 Nov 28 17:02:18 crc kubenswrapper[4710]: I1128 17:02:18.356359 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqpcl" event={"ID":"648e6216-c033-4b77-8dbf-851bbc69edd6","Type":"ContainerDied","Data":"8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4"} Nov 28 17:02:19 crc kubenswrapper[4710]: I1128 17:02:19.365637 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqpcl" 
event={"ID":"648e6216-c033-4b77-8dbf-851bbc69edd6","Type":"ContainerStarted","Data":"ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262"} Nov 28 17:02:19 crc kubenswrapper[4710]: I1128 17:02:19.369989 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfs9g" event={"ID":"b69c848e-e4d1-45f3-8bd2-362ffbc93130","Type":"ContainerStarted","Data":"77832f0138e6d9710ecaa8983f38c1ffae45ea5d6feeb2d01883250ab80f185a"} Nov 28 17:02:19 crc kubenswrapper[4710]: I1128 17:02:19.415294 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sqpcl" podStartSLOduration=2.414869149 podStartE2EDuration="1m2.415266116s" podCreationTimestamp="2025-11-28 17:01:17 +0000 UTC" firstStartedPulling="2025-11-28 17:01:18.982648163 +0000 UTC m=+168.240948208" lastFinishedPulling="2025-11-28 17:02:18.98304513 +0000 UTC m=+228.241345175" observedRunningTime="2025-11-28 17:02:19.389163264 +0000 UTC m=+228.647463309" watchObservedRunningTime="2025-11-28 17:02:19.415266116 +0000 UTC m=+228.673566161" Nov 28 17:02:20 crc kubenswrapper[4710]: I1128 17:02:20.376224 4710 generic.go:334] "Generic (PLEG): container finished" podID="b967853a-325f-468f-8198-56df77075edf" containerID="7fdb0e4744f023bd61e303dbc3693a2df176c61e452cc19f086d59491266ccf2" exitCode=0 Nov 28 17:02:20 crc kubenswrapper[4710]: I1128 17:02:20.376317 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89trk" event={"ID":"b967853a-325f-468f-8198-56df77075edf","Type":"ContainerDied","Data":"7fdb0e4744f023bd61e303dbc3693a2df176c61e452cc19f086d59491266ccf2"} Nov 28 17:02:20 crc kubenswrapper[4710]: I1128 17:02:20.379832 4710 generic.go:334] "Generic (PLEG): container finished" podID="e663d5c3-28d1-41de-bc55-18a61513b493" containerID="7cc6edc017f0e75f211c71d61683ddfa11ce70030897d05ba34622ff88927434" exitCode=0 Nov 28 17:02:20 crc kubenswrapper[4710]: I1128 17:02:20.379974 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6f9j" event={"ID":"e663d5c3-28d1-41de-bc55-18a61513b493","Type":"ContainerDied","Data":"7cc6edc017f0e75f211c71d61683ddfa11ce70030897d05ba34622ff88927434"} Nov 28 17:02:20 crc kubenswrapper[4710]: I1128 17:02:20.382514 4710 generic.go:334] "Generic (PLEG): container finished" podID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerID="77832f0138e6d9710ecaa8983f38c1ffae45ea5d6feeb2d01883250ab80f185a" exitCode=0 Nov 28 17:02:20 crc kubenswrapper[4710]: I1128 17:02:20.382558 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfs9g" event={"ID":"b69c848e-e4d1-45f3-8bd2-362ffbc93130","Type":"ContainerDied","Data":"77832f0138e6d9710ecaa8983f38c1ffae45ea5d6feeb2d01883250ab80f185a"} Nov 28 17:02:20 crc kubenswrapper[4710]: I1128 17:02:20.384818 4710 generic.go:334] "Generic (PLEG): container finished" podID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerID="95501947a922778792dd1a52906be2aabf163cb38ad048896a00e2cb9999390a" exitCode=0 Nov 28 17:02:20 crc kubenswrapper[4710]: I1128 17:02:20.384859 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrwkm" event={"ID":"013bd749-c6a7-42af-9bf4-96a35c5fc718","Type":"ContainerDied","Data":"95501947a922778792dd1a52906be2aabf163cb38ad048896a00e2cb9999390a"} Nov 28 17:02:21 crc kubenswrapper[4710]: I1128 17:02:21.394117 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-h6f9j" event={"ID":"e663d5c3-28d1-41de-bc55-18a61513b493","Type":"ContainerStarted","Data":"4a086ae99ebc1882a57d41cbe468d30ec1abd952c24a6e01209b3ec3e3aef0df"} Nov 28 17:02:21 crc kubenswrapper[4710]: I1128 17:02:21.397355 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfs9g" event={"ID":"b69c848e-e4d1-45f3-8bd2-362ffbc93130","Type":"ContainerStarted","Data":"d56e28f87ecd19ec0957ae19211f1dc2c4542f3887f118a4176fc96b37aa1afd"} Nov 28 17:02:21 crc kubenswrapper[4710]: I1128 17:02:21.399363 4710 generic.go:334] "Generic (PLEG): container finished" podID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerID="78e98da6c29429bbdeca120249e06a6ab5fa83a9230479da91e18cabc4977f38" exitCode=0 Nov 28 17:02:21 crc kubenswrapper[4710]: I1128 17:02:21.399413 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghnkd" event={"ID":"60f78884-95af-4b4f-bc63-66d8c883f9dc","Type":"ContainerDied","Data":"78e98da6c29429bbdeca120249e06a6ab5fa83a9230479da91e18cabc4977f38"} Nov 28 17:02:21 crc kubenswrapper[4710]: I1128 17:02:21.402172 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89trk" event={"ID":"b967853a-325f-468f-8198-56df77075edf","Type":"ContainerStarted","Data":"0ce0abb225786b6a0b3d7cdf18878af5036bd6820e250e6f8b7f9c1dba91ed91"} Nov 28 17:02:21 crc kubenswrapper[4710]: I1128 17:02:21.451158 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h6f9j" podStartSLOduration=2.300169554 podStartE2EDuration="1m5.451136909s" podCreationTimestamp="2025-11-28 17:01:16 +0000 UTC" firstStartedPulling="2025-11-28 17:01:17.923539185 +0000 UTC m=+167.181839230" lastFinishedPulling="2025-11-28 17:02:21.07450654 +0000 UTC m=+230.332806585" observedRunningTime="2025-11-28 17:02:21.415614699 +0000 UTC m=+230.673914744" watchObservedRunningTime="2025-11-28 17:02:21.451136909 +0000 UTC m=+230.709436964" Nov 28 17:02:21 crc kubenswrapper[4710]: I1128 17:02:21.491664 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-89trk" podStartSLOduration=3.177973769 podStartE2EDuration="1m6.491642227s" podCreationTimestamp="2025-11-28 17:01:15 +0000 UTC" firstStartedPulling="2025-11-28 17:01:17.892082068 +0000 UTC m=+167.150382113" lastFinishedPulling="2025-11-28 17:02:21.205750526 +0000 UTC m=+230.464050571" observedRunningTime="2025-11-28 17:02:21.454570475 +0000 UTC m=+230.712870520" watchObservedRunningTime="2025-11-28 17:02:21.491642227 +0000 UTC m=+230.749942282" Nov 28 17:02:21 crc kubenswrapper[4710]: I1128 17:02:21.522722 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nfs9g" podStartSLOduration=3.405223094 podStartE2EDuration="1m8.522701867s" podCreationTimestamp="2025-11-28 17:01:13 +0000 UTC" firstStartedPulling="2025-11-28 17:01:15.777820771 +0000 UTC m=+165.036120816" lastFinishedPulling="2025-11-28 17:02:20.895299544 +0000 UTC m=+230.153599589" observedRunningTime="2025-11-28 17:02:21.49202176 +0000 UTC m=+230.750321815" watchObservedRunningTime="2025-11-28 17:02:21.522701867 +0000 UTC m=+230.781001912" Nov 28 17:02:23 crc kubenswrapper[4710]: I1128 17:02:23.240390 4710 generic.go:334] "Generic (PLEG): container finished" podID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" 
containerID="3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e" exitCode=0 Nov 28 17:02:23 crc kubenswrapper[4710]: I1128 17:02:23.295449 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrwkm" event={"ID":"013bd749-c6a7-42af-9bf4-96a35c5fc718","Type":"ContainerStarted","Data":"6a11bb8783a728ab90c8854c574558a4887c8960799773860e96c9774258fda0"} Nov 28 17:02:23 crc kubenswrapper[4710]: I1128 17:02:23.296076 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh6j" event={"ID":"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63","Type":"ContainerDied","Data":"3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e"} Nov 28 17:02:23 crc kubenswrapper[4710]: I1128 17:02:23.324337 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vrwkm" podStartSLOduration=3.6820870230000002 podStartE2EDuration="1m9.324322164s" podCreationTimestamp="2025-11-28 17:01:14 +0000 UTC" firstStartedPulling="2025-11-28 17:01:15.787093361 +0000 UTC m=+165.045393396" lastFinishedPulling="2025-11-28 17:02:21.429328492 +0000 UTC m=+230.687628537" observedRunningTime="2025-11-28 17:02:23.323182965 +0000 UTC m=+232.581483010" watchObservedRunningTime="2025-11-28 17:02:23.324322164 +0000 UTC m=+232.582622209" Nov 28 17:02:24 crc kubenswrapper[4710]: I1128 17:02:24.246063 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghnkd" event={"ID":"60f78884-95af-4b4f-bc63-66d8c883f9dc","Type":"ContainerStarted","Data":"bb01b22a46ab966ff8d8ef8e3a93b7837522b9a0a5262094e8d6380ad004738d"} Nov 28 17:02:24 crc kubenswrapper[4710]: I1128 17:02:24.271223 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ghnkd" podStartSLOduration=5.184983336 podStartE2EDuration="1m11.271207524s" podCreationTimestamp="2025-11-28 17:01:13 +0000 UTC" firstStartedPulling="2025-11-28 17:01:15.762526828 +0000 UTC m=+165.020826873" lastFinishedPulling="2025-11-28 17:02:21.848751016 +0000 UTC m=+231.107051061" observedRunningTime="2025-11-28 17:02:24.26162334 +0000 UTC m=+233.519923385" watchObservedRunningTime="2025-11-28 17:02:24.271207524 +0000 UTC m=+233.529507569" Nov 28 17:02:24 crc kubenswrapper[4710]: I1128 17:02:24.290866 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:02:24 crc kubenswrapper[4710]: I1128 17:02:24.290914 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:02:24 crc kubenswrapper[4710]: I1128 17:02:24.442194 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:02:24 crc kubenswrapper[4710]: I1128 17:02:24.443183 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:02:25 crc kubenswrapper[4710]: I1128 17:02:25.393566 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nfs9g" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerName="registry-server" probeResult="failure" output=< Nov 28 17:02:25 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s Nov 28 17:02:25 crc kubenswrapper[4710]: > Nov 28 17:02:25 crc kubenswrapper[4710]: I1128 
Nov 28 17:02:25 crc kubenswrapper[4710]: I1128 17:02:25.499895 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vrwkm" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerName="registry-server" probeResult="failure" output=< Nov 28 17:02:25 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s Nov 28 17:02:25 crc kubenswrapper[4710]: > Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.038254 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.038547 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.510884 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c9rg6"] Nov 28 17:02:26 crc kubenswrapper[4710]: E1128 17:02:26.511142 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea7b5e9-e016-4724-85a2-a4bf2b623f1c" containerName="pruner" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.511156 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="cea7b5e9-e016-4724-85a2-a4bf2b623f1c" containerName="pruner" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.511283 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea7b5e9-e016-4724-85a2-a4bf2b623f1c" containerName="pruner" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.511741 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.528569 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c9rg6"] Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.703247 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvrqc\" (UniqueName: \"kubernetes.io/projected/606e7810-91c6-46a0-9a31-67713c3cfe5e-kube-api-access-tvrqc\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.703299 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/606e7810-91c6-46a0-9a31-67713c3cfe5e-registry-tls\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.703346 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.703397 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/606e7810-91c6-46a0-9a31-67713c3cfe5e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: 
\"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.703494 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/606e7810-91c6-46a0-9a31-67713c3cfe5e-registry-certificates\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.703566 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/606e7810-91c6-46a0-9a31-67713c3cfe5e-bound-sa-token\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.703602 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/606e7810-91c6-46a0-9a31-67713c3cfe5e-trusted-ca\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.703626 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/606e7810-91c6-46a0-9a31-67713c3cfe5e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.728391 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.805203 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/606e7810-91c6-46a0-9a31-67713c3cfe5e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.805283 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/606e7810-91c6-46a0-9a31-67713c3cfe5e-registry-certificates\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.805357 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/606e7810-91c6-46a0-9a31-67713c3cfe5e-bound-sa-token\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 
17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.805375 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/606e7810-91c6-46a0-9a31-67713c3cfe5e-trusted-ca\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.805389 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/606e7810-91c6-46a0-9a31-67713c3cfe5e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.805706 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/606e7810-91c6-46a0-9a31-67713c3cfe5e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.805815 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvrqc\" (UniqueName: \"kubernetes.io/projected/606e7810-91c6-46a0-9a31-67713c3cfe5e-kube-api-access-tvrqc\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.805842 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/606e7810-91c6-46a0-9a31-67713c3cfe5e-registry-tls\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.806811 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/606e7810-91c6-46a0-9a31-67713c3cfe5e-trusted-ca\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.807216 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/606e7810-91c6-46a0-9a31-67713c3cfe5e-registry-certificates\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.812263 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/606e7810-91c6-46a0-9a31-67713c3cfe5e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.812299 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/606e7810-91c6-46a0-9a31-67713c3cfe5e-registry-tls\") pod 
\"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.820550 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/606e7810-91c6-46a0-9a31-67713c3cfe5e-bound-sa-token\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.826369 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvrqc\" (UniqueName: \"kubernetes.io/projected/606e7810-91c6-46a0-9a31-67713c3cfe5e-kube-api-access-tvrqc\") pod \"image-registry-66df7c8f76-c9rg6\" (UID: \"606e7810-91c6-46a0-9a31-67713c3cfe5e\") " pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:26 crc kubenswrapper[4710]: I1128 17:02:26.829387 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:27 crc kubenswrapper[4710]: I1128 17:02:27.019642 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-c9rg6"] Nov 28 17:02:27 crc kubenswrapper[4710]: I1128 17:02:27.073463 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-89trk" podUID="b967853a-325f-468f-8198-56df77075edf" containerName="registry-server" probeResult="failure" output=< Nov 28 17:02:27 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s Nov 28 17:02:27 crc kubenswrapper[4710]: > Nov 28 17:02:27 crc kubenswrapper[4710]: I1128 17:02:27.118179 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:02:27 crc kubenswrapper[4710]: I1128 17:02:27.118266 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:02:27 crc kubenswrapper[4710]: I1128 17:02:27.264568 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" event={"ID":"606e7810-91c6-46a0-9a31-67713c3cfe5e","Type":"ContainerStarted","Data":"bd6896fe5e125f70d93adc9532477ce035814c8c08761d9908d72595c25aee79"} Nov 28 17:02:27 crc kubenswrapper[4710]: I1128 17:02:27.556816 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:02:27 crc kubenswrapper[4710]: I1128 17:02:27.556865 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:02:27 crc kubenswrapper[4710]: I1128 17:02:27.625237 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:02:28 crc kubenswrapper[4710]: I1128 17:02:28.154977 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h6f9j" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" containerName="registry-server" probeResult="failure" output=< Nov 28 17:02:28 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s Nov 28 17:02:28 crc kubenswrapper[4710]: > Nov 28 17:02:28 crc kubenswrapper[4710]: I1128 17:02:28.273181 4710 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" event={"ID":"606e7810-91c6-46a0-9a31-67713c3cfe5e","Type":"ContainerStarted","Data":"9e0fa913d80fdc8ced3df2c46358888d0331d45e53acab0f2f35a932eb72c449"} Nov 28 17:02:28 crc kubenswrapper[4710]: I1128 17:02:28.306025 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" podStartSLOduration=2.305995709 podStartE2EDuration="2.305995709s" podCreationTimestamp="2025-11-28 17:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:02:28.301149505 +0000 UTC m=+237.559449570" watchObservedRunningTime="2025-11-28 17:02:28.305995709 +0000 UTC m=+237.564295834" Nov 28 17:02:28 crc kubenswrapper[4710]: I1128 17:02:28.331750 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:02:29 crc kubenswrapper[4710]: I1128 17:02:29.279396 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" Nov 28 17:02:30 crc kubenswrapper[4710]: I1128 17:02:30.331229 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sqpcl"] Nov 28 17:02:30 crc kubenswrapper[4710]: I1128 17:02:30.331566 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sqpcl" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerName="registry-server" containerID="cri-o://ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262" gracePeriod=2 Nov 28 17:02:31 crc kubenswrapper[4710]: I1128 17:02:31.902862 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.079597 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-utilities\") pod \"648e6216-c033-4b77-8dbf-851bbc69edd6\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.079969 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw86k\" (UniqueName: \"kubernetes.io/projected/648e6216-c033-4b77-8dbf-851bbc69edd6-kube-api-access-kw86k\") pod \"648e6216-c033-4b77-8dbf-851bbc69edd6\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.080013 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-catalog-content\") pod \"648e6216-c033-4b77-8dbf-851bbc69edd6\" (UID: \"648e6216-c033-4b77-8dbf-851bbc69edd6\") " Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.081147 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-utilities" (OuterVolumeSpecName: "utilities") pod "648e6216-c033-4b77-8dbf-851bbc69edd6" (UID: "648e6216-c033-4b77-8dbf-851bbc69edd6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.084562 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648e6216-c033-4b77-8dbf-851bbc69edd6-kube-api-access-kw86k" (OuterVolumeSpecName: "kube-api-access-kw86k") pod "648e6216-c033-4b77-8dbf-851bbc69edd6" (UID: "648e6216-c033-4b77-8dbf-851bbc69edd6"). InnerVolumeSpecName "kube-api-access-kw86k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.182041 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.182081 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw86k\" (UniqueName: \"kubernetes.io/projected/648e6216-c033-4b77-8dbf-851bbc69edd6-kube-api-access-kw86k\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.298319 4710 generic.go:334] "Generic (PLEG): container finished" podID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerID="ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262" exitCode=0 Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.298353 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqpcl" event={"ID":"648e6216-c033-4b77-8dbf-851bbc69edd6","Type":"ContainerDied","Data":"ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262"} Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.298382 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sqpcl" event={"ID":"648e6216-c033-4b77-8dbf-851bbc69edd6","Type":"ContainerDied","Data":"172d073109a074e907ed4ffcb123145a33a6e600b1d47661ec3452c71e59d08b"} Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.298402 4710 scope.go:117] "RemoveContainer" containerID="ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.298941 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sqpcl" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.584056 4710 scope.go:117] "RemoveContainer" containerID="8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.584595 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "648e6216-c033-4b77-8dbf-851bbc69edd6" (UID: "648e6216-c033-4b77-8dbf-851bbc69edd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.597280 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/648e6216-c033-4b77-8dbf-851bbc69edd6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.618639 4710 scope.go:117] "RemoveContainer" containerID="b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.645865 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sqpcl"] Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.648333 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sqpcl"] Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.660886 4710 scope.go:117] "RemoveContainer" containerID="ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262" Nov 28 17:02:32 crc kubenswrapper[4710]: E1128 17:02:32.661406 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262\": container with ID starting with ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262 not found: ID does not exist" containerID="ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.661445 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262"} err="failed to get container status \"ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262\": rpc error: code = NotFound desc = could not find container \"ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262\": container with ID starting with ff98cc2773156123fb0c529cc22574f487bb94646fc774cc75d5644ef6e48262 not found: ID does not exist" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.661485 4710 scope.go:117] "RemoveContainer" containerID="8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4" Nov 28 17:02:32 crc kubenswrapper[4710]: E1128 17:02:32.662059 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4\": container with ID starting with 8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4 not found: ID does not exist" containerID="8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.662088 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4"} err="failed to get container status \"8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4\": rpc error: code = NotFound desc = could not find container \"8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4\": container with ID starting with 8c94bb93dcf611911d577c8a0419d6d9836a1ad41be9dbe9b4dc542f2c3b61e4 not found: ID does not exist" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.662106 4710 scope.go:117] "RemoveContainer" containerID="b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139" Nov 28 17:02:32 crc kubenswrapper[4710]: E1128 
17:02:32.664975 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139\": container with ID starting with b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139 not found: ID does not exist" containerID="b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139" Nov 28 17:02:32 crc kubenswrapper[4710]: I1128 17:02:32.665010 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139"} err="failed to get container status \"b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139\": rpc error: code = NotFound desc = could not find container \"b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139\": container with ID starting with b427ea12eba70508fb84ba513e71338789c4a127f2800695894adf6d29330139 not found: ID does not exist" Nov 28 17:02:33 crc kubenswrapper[4710]: I1128 17:02:33.147364 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" path="/var/lib/kubelet/pods/648e6216-c033-4b77-8dbf-851bbc69edd6/volumes" Nov 28 17:02:33 crc kubenswrapper[4710]: I1128 17:02:33.314259 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh6j" event={"ID":"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63","Type":"ContainerStarted","Data":"5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349"} Nov 28 17:02:33 crc kubenswrapper[4710]: I1128 17:02:33.319171 4710 generic.go:334] "Generic (PLEG): container finished" podID="1f92a242-f0d2-495e-a018-1888abeedda2" containerID="b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7" exitCode=0 Nov 28 17:02:33 crc kubenswrapper[4710]: I1128 17:02:33.319217 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kn962" event={"ID":"1f92a242-f0d2-495e-a018-1888abeedda2","Type":"ContainerDied","Data":"b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7"} Nov 28 17:02:33 crc kubenswrapper[4710]: I1128 17:02:33.337506 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gmh6j" podStartSLOduration=3.698575733 podStartE2EDuration="1m17.337487839s" podCreationTimestamp="2025-11-28 17:01:16 +0000 UTC" firstStartedPulling="2025-11-28 17:01:18.978814868 +0000 UTC m=+168.237114913" lastFinishedPulling="2025-11-28 17:02:32.617726974 +0000 UTC m=+241.876027019" observedRunningTime="2025-11-28 17:02:33.333118072 +0000 UTC m=+242.591418127" watchObservedRunningTime="2025-11-28 17:02:33.337487839 +0000 UTC m=+242.595787884" Nov 28 17:02:34 crc kubenswrapper[4710]: I1128 17:02:34.044369 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:02:34 crc kubenswrapper[4710]: I1128 17:02:34.044806 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:02:34 crc kubenswrapper[4710]: I1128 17:02:34.109002 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:02:34 crc kubenswrapper[4710]: I1128 17:02:34.336005 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nfs9g" 
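
The ContainerStatus/DeleteContainer NotFound errors above are a benign race: the kubelet's RemoveContainer housekeeping runs after CRI-O has already deleted redhat-operators-sqpcl's containers, and an ID that no longer exists means the work is already done, so the kubelet logs the error and carries on (note the subsequent "Cleaned up orphaned pod volumes dir"). The usual idempotent-cleanup pattern, sketched in plain Go; removeContainer and errNotFound are hypothetical stand-ins, not kubelet types:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the CRI "rpc error: code = NotFound" above.
    var errNotFound = errors.New("not found")

    // removeContainer pretends the runtime already deleted the container.
    func removeContainer(id string) error {
        return fmt.Errorf("could not find container %q: %w", id, errNotFound)
    }

    // removeIfPresent treats NotFound as success: the container is gone
    // either way, so the cleanup stays idempotent and safe to retry.
    func removeIfPresent(id string) error {
        err := removeContainer(id)
        if errors.Is(err, errNotFound) {
            return nil // already removed; nothing left to do
        }
        return err
    }

    func main() {
        fmt.Println(removeIfPresent("ff98cc2773156123")) // <nil>
    }

Treating deletion as "ensure absent" rather than "delete exactly once" is what lets the kubelet and the runtime race here without either side failing the pod teardown.
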
Nov 28 17:02:34 crc kubenswrapper[4710]: I1128 17:02:34.393084 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:02:34 crc kubenswrapper[4710]: I1128 17:02:34.398979 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:02:34 crc kubenswrapper[4710]: I1128 17:02:34.495971 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-v7m54"] Nov 28 17:02:34 crc kubenswrapper[4710]: I1128 17:02:34.498813 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:02:34 crc kubenswrapper[4710]: I1128 17:02:34.562929 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:02:35 crc kubenswrapper[4710]: I1128 17:02:35.929640 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nfs9g"] Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.082508 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.122603 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.347312 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nfs9g" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerName="registry-server" containerID="cri-o://d56e28f87ecd19ec0957ae19211f1dc2c4542f3887f118a4176fc96b37aa1afd" gracePeriod=2 Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.445080 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.445241 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.492662 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.953393 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghnkd"] Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.953664 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ghnkd" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerName="registry-server" containerID="cri-o://bb01b22a46ab966ff8d8ef8e3a93b7837522b9a0a5262094e8d6380ad004738d" gracePeriod=30 Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.971121 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrwkm"] Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.974552 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vrwkm" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerName="registry-server" containerID="cri-o://6a11bb8783a728ab90c8854c574558a4887c8960799773860e96c9774258fda0" gracePeriod=30 Nov 28 17:02:36 crc 
kubenswrapper[4710]: I1128 17:02:36.978782 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kn962"] Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.994413 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbg64"] Nov 28 17:02:36 crc kubenswrapper[4710]: I1128 17:02:36.994755 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" podUID="93f56c4d-2217-41d4-82dc-aef9c5b5096e" containerName="marketplace-operator" containerID="cri-o://2392ae82df261c6e9d1bd549afa68ed1f7267b5ce24a92f827bfd3aed6c64958" gracePeriod=30 Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.007334 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89trk"] Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.014984 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh6j"] Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.028257 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4w9jc"] Nov 28 17:02:37 crc kubenswrapper[4710]: E1128 17:02:37.028508 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerName="registry-server" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.028524 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerName="registry-server" Nov 28 17:02:37 crc kubenswrapper[4710]: E1128 17:02:37.028539 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerName="extract-utilities" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.028546 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerName="extract-utilities" Nov 28 17:02:37 crc kubenswrapper[4710]: E1128 17:02:37.028559 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerName="extract-content" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.028565 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerName="extract-content" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.028672 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="648e6216-c033-4b77-8dbf-851bbc69edd6" containerName="registry-server" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.029073 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6f9j"] Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.029278 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h6f9j" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" containerName="registry-server" containerID="cri-o://4a086ae99ebc1882a57d41cbe468d30ec1abd952c24a6e01209b3ec3e3aef0df" gracePeriod=30 Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.029415 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.030008 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4w9jc"] Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.172744 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b297151b-94bd-4ed5-b889-511fc92fa343-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4w9jc\" (UID: \"b297151b-94bd-4ed5-b889-511fc92fa343\") " pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.173484 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms62h\" (UniqueName: \"kubernetes.io/projected/b297151b-94bd-4ed5-b889-511fc92fa343-kube-api-access-ms62h\") pod \"marketplace-operator-79b997595-4w9jc\" (UID: \"b297151b-94bd-4ed5-b889-511fc92fa343\") " pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.173644 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b297151b-94bd-4ed5-b889-511fc92fa343-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4w9jc\" (UID: \"b297151b-94bd-4ed5-b889-511fc92fa343\") " pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.274318 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b297151b-94bd-4ed5-b889-511fc92fa343-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4w9jc\" (UID: \"b297151b-94bd-4ed5-b889-511fc92fa343\") " pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.274442 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms62h\" (UniqueName: \"kubernetes.io/projected/b297151b-94bd-4ed5-b889-511fc92fa343-kube-api-access-ms62h\") pod \"marketplace-operator-79b997595-4w9jc\" (UID: \"b297151b-94bd-4ed5-b889-511fc92fa343\") " pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.274497 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b297151b-94bd-4ed5-b889-511fc92fa343-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4w9jc\" (UID: \"b297151b-94bd-4ed5-b889-511fc92fa343\") " pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.276302 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b297151b-94bd-4ed5-b889-511fc92fa343-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4w9jc\" (UID: \"b297151b-94bd-4ed5-b889-511fc92fa343\") " pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.280677 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b297151b-94bd-4ed5-b889-511fc92fa343-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4w9jc\" (UID: \"b297151b-94bd-4ed5-b889-511fc92fa343\") " pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.289500 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms62h\" (UniqueName: \"kubernetes.io/projected/b297151b-94bd-4ed5-b889-511fc92fa343-kube-api-access-ms62h\") pod \"marketplace-operator-79b997595-4w9jc\" (UID: \"b297151b-94bd-4ed5-b889-511fc92fa343\") " pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.358518 4710 generic.go:334] "Generic (PLEG): container finished" podID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerID="d56e28f87ecd19ec0957ae19211f1dc2c4542f3887f118a4176fc96b37aa1afd" exitCode=0 Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.358571 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfs9g" event={"ID":"b69c848e-e4d1-45f3-8bd2-362ffbc93130","Type":"ContainerDied","Data":"d56e28f87ecd19ec0957ae19211f1dc2c4542f3887f118a4176fc96b37aa1afd"} Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.359152 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-89trk" podUID="b967853a-325f-468f-8198-56df77075edf" containerName="registry-server" containerID="cri-o://0ce0abb225786b6a0b3d7cdf18878af5036bd6820e250e6f8b7f9c1dba91ed91" gracePeriod=30 Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.394765 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.425537 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" Nov 28 17:02:37 crc kubenswrapper[4710]: I1128 17:02:37.626879 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4w9jc"] Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.159244 4710 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vbg64 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.159304 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" podUID="93f56c4d-2217-41d4-82dc-aef9c5b5096e" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.331909 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrwkm"] Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.367143 4710 generic.go:334] "Generic (PLEG): container finished" podID="93f56c4d-2217-41d4-82dc-aef9c5b5096e" containerID="2392ae82df261c6e9d1bd549afa68ed1f7267b5ce24a92f827bfd3aed6c64958" exitCode=0 Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.367219 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" event={"ID":"93f56c4d-2217-41d4-82dc-aef9c5b5096e","Type":"ContainerDied","Data":"2392ae82df261c6e9d1bd549afa68ed1f7267b5ce24a92f827bfd3aed6c64958"} Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.370548 4710 generic.go:334] "Generic (PLEG): container finished" podID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerID="6a11bb8783a728ab90c8854c574558a4887c8960799773860e96c9774258fda0" exitCode=0 Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.370831 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrwkm" event={"ID":"013bd749-c6a7-42af-9bf4-96a35c5fc718","Type":"ContainerDied","Data":"6a11bb8783a728ab90c8854c574558a4887c8960799773860e96c9774258fda0"} Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.374534 4710 generic.go:334] "Generic (PLEG): container finished" podID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerID="bb01b22a46ab966ff8d8ef8e3a93b7837522b9a0a5262094e8d6380ad004738d" exitCode=0 Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.374632 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghnkd" event={"ID":"60f78884-95af-4b4f-bc63-66d8c883f9dc","Type":"ContainerDied","Data":"bb01b22a46ab966ff8d8ef8e3a93b7837522b9a0a5262094e8d6380ad004738d"} Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.375549 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" event={"ID":"b297151b-94bd-4ed5-b889-511fc92fa343","Type":"ContainerStarted","Data":"ea18bdc228344cdc12f5cf6d46366fde4e1de59bcfeadc890330fd1a7cceff1b"} Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.376728 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.377547 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kn962" event={"ID":"1f92a242-f0d2-495e-a018-1888abeedda2","Type":"ContainerStarted","Data":"e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae"} Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.379812 4710 generic.go:334] "Generic (PLEG): container finished" podID="e663d5c3-28d1-41de-bc55-18a61513b493" containerID="4a086ae99ebc1882a57d41cbe468d30ec1abd952c24a6e01209b3ec3e3aef0df" exitCode=0 Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.379885 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6f9j" event={"ID":"e663d5c3-28d1-41de-bc55-18a61513b493","Type":"ContainerDied","Data":"4a086ae99ebc1882a57d41cbe468d30ec1abd952c24a6e01209b3ec3e3aef0df"} Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.380058 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gmh6j" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerName="registry-server" containerID="cri-o://5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349" gracePeriod=30 Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.489712 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-utilities\") pod \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.489868 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnlff\" (UniqueName: \"kubernetes.io/projected/b69c848e-e4d1-45f3-8bd2-362ffbc93130-kube-api-access-pnlff\") pod \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.489942 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-catalog-content\") pod \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\" (UID: \"b69c848e-e4d1-45f3-8bd2-362ffbc93130\") " Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.490573 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-utilities" (OuterVolumeSpecName: "utilities") pod "b69c848e-e4d1-45f3-8bd2-362ffbc93130" (UID: "b69c848e-e4d1-45f3-8bd2-362ffbc93130"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.495264 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b69c848e-e4d1-45f3-8bd2-362ffbc93130-kube-api-access-pnlff" (OuterVolumeSpecName: "kube-api-access-pnlff") pod "b69c848e-e4d1-45f3-8bd2-362ffbc93130" (UID: "b69c848e-e4d1-45f3-8bd2-362ffbc93130"). InnerVolumeSpecName "kube-api-access-pnlff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.552981 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b69c848e-e4d1-45f3-8bd2-362ffbc93130" (UID: "b69c848e-e4d1-45f3-8bd2-362ffbc93130"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.591382 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.591413 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b69c848e-e4d1-45f3-8bd2-362ffbc93130-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:38 crc kubenswrapper[4710]: I1128 17:02:38.591428 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnlff\" (UniqueName: \"kubernetes.io/projected/b69c848e-e4d1-45f3-8bd2-362ffbc93130-kube-api-access-pnlff\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.298549 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.399344 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-catalog-content\") pod \"60f78884-95af-4b4f-bc63-66d8c883f9dc\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.399411 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqksm\" (UniqueName: \"kubernetes.io/projected/60f78884-95af-4b4f-bc63-66d8c883f9dc-kube-api-access-fqksm\") pod \"60f78884-95af-4b4f-bc63-66d8c883f9dc\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.399517 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-utilities\") pod \"60f78884-95af-4b4f-bc63-66d8c883f9dc\" (UID: \"60f78884-95af-4b4f-bc63-66d8c883f9dc\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.400598 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-utilities" (OuterVolumeSpecName: "utilities") pod "60f78884-95af-4b4f-bc63-66d8c883f9dc" (UID: "60f78884-95af-4b4f-bc63-66d8c883f9dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.413983 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60f78884-95af-4b4f-bc63-66d8c883f9dc-kube-api-access-fqksm" (OuterVolumeSpecName: "kube-api-access-fqksm") pod "60f78884-95af-4b4f-bc63-66d8c883f9dc" (UID: "60f78884-95af-4b4f-bc63-66d8c883f9dc"). InnerVolumeSpecName "kube-api-access-fqksm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.448875 4710 generic.go:334] "Generic (PLEG): container finished" podID="b967853a-325f-468f-8198-56df77075edf" containerID="0ce0abb225786b6a0b3d7cdf18878af5036bd6820e250e6f8b7f9c1dba91ed91" exitCode=0 Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.448966 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89trk" event={"ID":"b967853a-325f-468f-8198-56df77075edf","Type":"ContainerDied","Data":"0ce0abb225786b6a0b3d7cdf18878af5036bd6820e250e6f8b7f9c1dba91ed91"} Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.471189 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" event={"ID":"b297151b-94bd-4ed5-b889-511fc92fa343","Type":"ContainerStarted","Data":"a3f59656631554bb00b54a07d6b7c877f5f60c0af0ba15f54d4aaf3db75825e5"} Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.478163 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nfs9g" event={"ID":"b69c848e-e4d1-45f3-8bd2-362ffbc93130","Type":"ContainerDied","Data":"612502fab17c27687912197ef06486d1a42f708ecefccf4bf9ad1f442a23baa0"} Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.478262 4710 scope.go:117] "RemoveContainer" containerID="d56e28f87ecd19ec0957ae19211f1dc2c4542f3887f118a4176fc96b37aa1afd" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.478458 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nfs9g" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.483713 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ghnkd" event={"ID":"60f78884-95af-4b4f-bc63-66d8c883f9dc","Type":"ContainerDied","Data":"bab4963b5423689bd83942ef0a87f968da41e27fd7050dce0c0f704a1eabe462"} Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.483962 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ghnkd" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.495491 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nfs9g"] Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.502445 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqksm\" (UniqueName: \"kubernetes.io/projected/60f78884-95af-4b4f-bc63-66d8c883f9dc-kube-api-access-fqksm\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.502471 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.504415 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nfs9g"] Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.507949 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60f78884-95af-4b4f-bc63-66d8c883f9dc" (UID: "60f78884-95af-4b4f-bc63-66d8c883f9dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.517435 4710 scope.go:117] "RemoveContainer" containerID="77832f0138e6d9710ecaa8983f38c1ffae45ea5d6feeb2d01883250ab80f185a" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.517635 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vrwkm" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.569257 4710 scope.go:117] "RemoveContainer" containerID="831dcb572c4de08ac72787a5e7697db73837496ecfb7593d8df6f62f54a09230" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.598334 4710 scope.go:117] "RemoveContainer" containerID="bb01b22a46ab966ff8d8ef8e3a93b7837522b9a0a5262094e8d6380ad004738d" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.605015 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60f78884-95af-4b4f-bc63-66d8c883f9dc-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.630925 4710 scope.go:117] "RemoveContainer" containerID="78e98da6c29429bbdeca120249e06a6ab5fa83a9230479da91e18cabc4977f38" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.694414 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.698088 4710 scope.go:117] "RemoveContainer" containerID="1b932af1acca9edb6112733ed0bea88c43cdc2d2cebdc1bd17f786c77fa46611" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.708426 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qld2f\" (UniqueName: \"kubernetes.io/projected/013bd749-c6a7-42af-9bf4-96a35c5fc718-kube-api-access-qld2f\") pod \"013bd749-c6a7-42af-9bf4-96a35c5fc718\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.708604 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-utilities\") pod \"013bd749-c6a7-42af-9bf4-96a35c5fc718\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.708649 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-catalog-content\") pod \"013bd749-c6a7-42af-9bf4-96a35c5fc718\" (UID: \"013bd749-c6a7-42af-9bf4-96a35c5fc718\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.711075 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-utilities" (OuterVolumeSpecName: "utilities") pod "013bd749-c6a7-42af-9bf4-96a35c5fc718" (UID: "013bd749-c6a7-42af-9bf4-96a35c5fc718"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.725090 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013bd749-c6a7-42af-9bf4-96a35c5fc718-kube-api-access-qld2f" (OuterVolumeSpecName: "kube-api-access-qld2f") pod "013bd749-c6a7-42af-9bf4-96a35c5fc718" (UID: "013bd749-c6a7-42af-9bf4-96a35c5fc718"). InnerVolumeSpecName "kube-api-access-qld2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.753527 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6f9j" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.778970 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "013bd749-c6a7-42af-9bf4-96a35c5fc718" (UID: "013bd749-c6a7-42af-9bf4-96a35c5fc718"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.811084 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvk6b\" (UniqueName: \"kubernetes.io/projected/93f56c4d-2217-41d4-82dc-aef9c5b5096e-kube-api-access-wvk6b\") pod \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.811130 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-operator-metrics\") pod \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.811202 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-trusted-ca\") pod \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\" (UID: \"93f56c4d-2217-41d4-82dc-aef9c5b5096e\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.811492 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qld2f\" (UniqueName: \"kubernetes.io/projected/013bd749-c6a7-42af-9bf4-96a35c5fc718-kube-api-access-qld2f\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.811505 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.811515 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/013bd749-c6a7-42af-9bf4-96a35c5fc718-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.813303 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "93f56c4d-2217-41d4-82dc-aef9c5b5096e" (UID: "93f56c4d-2217-41d4-82dc-aef9c5b5096e"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.816517 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "93f56c4d-2217-41d4-82dc-aef9c5b5096e" (UID: "93f56c4d-2217-41d4-82dc-aef9c5b5096e"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.818836 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ghnkd"] Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.819105 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f56c4d-2217-41d4-82dc-aef9c5b5096e-kube-api-access-wvk6b" (OuterVolumeSpecName: "kube-api-access-wvk6b") pod "93f56c4d-2217-41d4-82dc-aef9c5b5096e" (UID: "93f56c4d-2217-41d4-82dc-aef9c5b5096e"). InnerVolumeSpecName "kube-api-access-wvk6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.823120 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ghnkd"] Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.911909 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-catalog-content\") pod \"e663d5c3-28d1-41de-bc55-18a61513b493\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.911967 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkg7x\" (UniqueName: \"kubernetes.io/projected/e663d5c3-28d1-41de-bc55-18a61513b493-kube-api-access-nkg7x\") pod \"e663d5c3-28d1-41de-bc55-18a61513b493\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.912003 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-utilities\") pod \"e663d5c3-28d1-41de-bc55-18a61513b493\" (UID: \"e663d5c3-28d1-41de-bc55-18a61513b493\") " Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.912287 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvk6b\" (UniqueName: \"kubernetes.io/projected/93f56c4d-2217-41d4-82dc-aef9c5b5096e-kube-api-access-wvk6b\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.912303 4710 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.912315 4710 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/93f56c4d-2217-41d4-82dc-aef9c5b5096e-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.913087 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-utilities" (OuterVolumeSpecName: "utilities") pod "e663d5c3-28d1-41de-bc55-18a61513b493" (UID: "e663d5c3-28d1-41de-bc55-18a61513b493"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:39 crc kubenswrapper[4710]: I1128 17:02:39.916460 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e663d5c3-28d1-41de-bc55-18a61513b493-kube-api-access-nkg7x" (OuterVolumeSpecName: "kube-api-access-nkg7x") pod "e663d5c3-28d1-41de-bc55-18a61513b493" (UID: "e663d5c3-28d1-41de-bc55-18a61513b493"). InnerVolumeSpecName "kube-api-access-nkg7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.013586 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkg7x\" (UniqueName: \"kubernetes.io/projected/e663d5c3-28d1-41de-bc55-18a61513b493-kube-api-access-nkg7x\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.013623 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.015653 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e663d5c3-28d1-41de-bc55-18a61513b493" (UID: "e663d5c3-28d1-41de-bc55-18a61513b493"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.114997 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e663d5c3-28d1-41de-bc55-18a61513b493-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.400025 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89trk" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.419927 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh6j" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.493063 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vrwkm" event={"ID":"013bd749-c6a7-42af-9bf4-96a35c5fc718","Type":"ContainerDied","Data":"1bd4dceaf02af15381f6ec5a90c9e71eee4a4ba499423dab9287d99ebb363dbf"} Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.493127 4710 scope.go:117] "RemoveContainer" containerID="6a11bb8783a728ab90c8854c574558a4887c8960799773860e96c9774258fda0" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.493076 4710 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.496858 4710 generic.go:334] "Generic (PLEG): container finished" podID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerID="5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349" exitCode=0
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.496918 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh6j" event={"ID":"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63","Type":"ContainerDied","Data":"5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349"}
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.496957 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmh6j" event={"ID":"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63","Type":"ContainerDied","Data":"1da82a3ad4c3898df749af2f4cfef87f70b120848157ec6431226e02945099b4"}
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.497005 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmh6j"
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.503381 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-89trk" event={"ID":"b967853a-325f-468f-8198-56df77075edf","Type":"ContainerDied","Data":"77c9bc8d4bb537fcfa3a6e26654694d6bbd5ebe3c92ea3a8a3bb8414028271cd"}
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.503496 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-89trk"
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.509922 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h6f9j"
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.510665 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h6f9j" event={"ID":"e663d5c3-28d1-41de-bc55-18a61513b493","Type":"ContainerDied","Data":"872e43683258385295355c04848f87688d15fb8e79e13a5fdab0e30e784de477"}
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.512605 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kn962" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" containerName="registry-server" containerID="cri-o://e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae" gracePeriod=30
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.513131 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64"
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.515434 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vbg64" event={"ID":"93f56c4d-2217-41d4-82dc-aef9c5b5096e","Type":"ContainerDied","Data":"6dc44c8c67d26301267d670a7e49ff4f2cfca7a97f8f519941e39e329b413712"}
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.515520 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc"
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.522009 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc"
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.523654 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vrwkm"]
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.524199 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nfqx\" (UniqueName: \"kubernetes.io/projected/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-kube-api-access-8nfqx\") pod \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") "
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.524256 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-catalog-content\") pod \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") "
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.524293 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-utilities\") pod \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\" (UID: \"9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63\") "
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.524463 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-catalog-content\") pod \"b967853a-325f-468f-8198-56df77075edf\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") "
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.524555 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-utilities\") pod \"b967853a-325f-468f-8198-56df77075edf\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") "
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.524593 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df2zs\" (UniqueName: \"kubernetes.io/projected/b967853a-325f-468f-8198-56df77075edf-kube-api-access-df2zs\") pod \"b967853a-325f-468f-8198-56df77075edf\" (UID: \"b967853a-325f-468f-8198-56df77075edf\") "
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.530512 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b967853a-325f-468f-8198-56df77075edf-kube-api-access-df2zs" (OuterVolumeSpecName: "kube-api-access-df2zs") pod "b967853a-325f-468f-8198-56df77075edf" (UID: "b967853a-325f-468f-8198-56df77075edf"). InnerVolumeSpecName "kube-api-access-df2zs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.532000 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-utilities" (OuterVolumeSpecName: "utilities") pod "b967853a-325f-468f-8198-56df77075edf" (UID: "b967853a-325f-468f-8198-56df77075edf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.532111 4710 scope.go:117] "RemoveContainer" containerID="95501947a922778792dd1a52906be2aabf163cb38ad048896a00e2cb9999390a"
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.534889 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-utilities" (OuterVolumeSpecName: "utilities") pod "9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" (UID: "9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.535924 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.535958 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df2zs\" (UniqueName: \"kubernetes.io/projected/b967853a-325f-468f-8198-56df77075edf-kube-api-access-df2zs\") on node \"crc\" DevicePath \"\""
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.539467 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-kube-api-access-8nfqx" (OuterVolumeSpecName: "kube-api-access-8nfqx") pod "9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" (UID: "9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63"). InnerVolumeSpecName "kube-api-access-8nfqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.543169 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vrwkm"]
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.549985 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" (UID: "9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.550959 4710 scope.go:117] "RemoveContainer" containerID="70f31202bcdf0b6c9c11589a203ea0a6fef80ede2bf57d4d5b241eae3d311f44"
Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.558200 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b967853a-325f-468f-8198-56df77075edf" (UID: "b967853a-325f-468f-8198-56df77075edf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.561157 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kn962" podStartSLOduration=6.401776752 podStartE2EDuration="1m27.561128513s" podCreationTimestamp="2025-11-28 17:01:13 +0000 UTC" firstStartedPulling="2025-11-28 17:01:15.77250569 +0000 UTC m=+165.030805735" lastFinishedPulling="2025-11-28 17:02:36.931857451 +0000 UTC m=+246.190157496" observedRunningTime="2025-11-28 17:02:40.560894565 +0000 UTC m=+249.819194630" watchObservedRunningTime="2025-11-28 17:02:40.561128513 +0000 UTC m=+249.819428568" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.585049 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4w9jc" podStartSLOduration=4.58502772 podStartE2EDuration="4.58502772s" podCreationTimestamp="2025-11-28 17:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:02:40.578389227 +0000 UTC m=+249.836689282" watchObservedRunningTime="2025-11-28 17:02:40.58502772 +0000 UTC m=+249.843327765" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.608938 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbg64"] Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.611299 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vbg64"] Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.616631 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h6f9j"] Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.618913 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h6f9j"] Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.637476 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nfqx\" (UniqueName: \"kubernetes.io/projected/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-kube-api-access-8nfqx\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.637522 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.637531 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.637541 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b967853a-325f-468f-8198-56df77075edf-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.661938 4710 scope.go:117] "RemoveContainer" containerID="5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.681137 4710 scope.go:117] "RemoveContainer" containerID="3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.694821 4710 scope.go:117] "RemoveContainer" 
containerID="061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.711984 4710 scope.go:117] "RemoveContainer" containerID="5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349" Nov 28 17:02:40 crc kubenswrapper[4710]: E1128 17:02:40.712387 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349\": container with ID starting with 5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349 not found: ID does not exist" containerID="5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.712414 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349"} err="failed to get container status \"5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349\": rpc error: code = NotFound desc = could not find container \"5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349\": container with ID starting with 5faf249a6c2c6a43d5c4741b8f5f7385058277a96b6a66b47417e42884db0349 not found: ID does not exist" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.712433 4710 scope.go:117] "RemoveContainer" containerID="3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e" Nov 28 17:02:40 crc kubenswrapper[4710]: E1128 17:02:40.713150 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e\": container with ID starting with 3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e not found: ID does not exist" containerID="3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.713174 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e"} err="failed to get container status \"3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e\": rpc error: code = NotFound desc = could not find container \"3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e\": container with ID starting with 3c81905350aab0b9edc4f2ecea2cd94919d893f3484dc5ec4fd7058ec215e38e not found: ID does not exist" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.713189 4710 scope.go:117] "RemoveContainer" containerID="061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677" Nov 28 17:02:40 crc kubenswrapper[4710]: E1128 17:02:40.713662 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677\": container with ID starting with 061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677 not found: ID does not exist" containerID="061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.713680 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677"} err="failed to get container status \"061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677\": rpc error: code = 
NotFound desc = could not find container \"061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677\": container with ID starting with 061a5696333dafbea49066f85e95bfaedc74fb3742cfcbd7d80066200fb64677 not found: ID does not exist" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.713693 4710 scope.go:117] "RemoveContainer" containerID="0ce0abb225786b6a0b3d7cdf18878af5036bd6820e250e6f8b7f9c1dba91ed91" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.728704 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh6j"] Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.751884 4710 scope.go:117] "RemoveContainer" containerID="7fdb0e4744f023bd61e303dbc3693a2df176c61e452cc19f086d59491266ccf2" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.774306 4710 scope.go:117] "RemoveContainer" containerID="c8f03aa3ef4a910c622ea58c8ebefb4c3e1dfb4a61efd9a5d82cd36d74aa7ca5" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.806532 4710 scope.go:117] "RemoveContainer" containerID="4a086ae99ebc1882a57d41cbe468d30ec1abd952c24a6e01209b3ec3e3aef0df" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.823872 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kn962" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.828206 4710 scope.go:117] "RemoveContainer" containerID="7cc6edc017f0e75f211c71d61683ddfa11ce70030897d05ba34622ff88927434" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.833402 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-89trk"] Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.837129 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-89trk"] Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.844922 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh6j"] Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.848160 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmh6j"] Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.853271 4710 scope.go:117] "RemoveContainer" containerID="22496dc2a13555c0cf665df9540e38f6ec94713bc4c34cb62fe9be73b05beb9b" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.873200 4710 scope.go:117] "RemoveContainer" containerID="2392ae82df261c6e9d1bd549afa68ed1f7267b5ce24a92f827bfd3aed6c64958" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.942315 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-utilities\") pod \"1f92a242-f0d2-495e-a018-1888abeedda2\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.942454 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-catalog-content\") pod \"1f92a242-f0d2-495e-a018-1888abeedda2\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.942509 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v99lj\" (UniqueName: \"kubernetes.io/projected/1f92a242-f0d2-495e-a018-1888abeedda2-kube-api-access-v99lj\") pod 
\"1f92a242-f0d2-495e-a018-1888abeedda2\" (UID: \"1f92a242-f0d2-495e-a018-1888abeedda2\") " Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.943272 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-utilities" (OuterVolumeSpecName: "utilities") pod "1f92a242-f0d2-495e-a018-1888abeedda2" (UID: "1f92a242-f0d2-495e-a018-1888abeedda2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:40 crc kubenswrapper[4710]: I1128 17:02:40.945702 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f92a242-f0d2-495e-a018-1888abeedda2-kube-api-access-v99lj" (OuterVolumeSpecName: "kube-api-access-v99lj") pod "1f92a242-f0d2-495e-a018-1888abeedda2" (UID: "1f92a242-f0d2-495e-a018-1888abeedda2"). InnerVolumeSpecName "kube-api-access-v99lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.002490 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f92a242-f0d2-495e-a018-1888abeedda2" (UID: "1f92a242-f0d2-495e-a018-1888abeedda2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.044179 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.044214 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v99lj\" (UniqueName: \"kubernetes.io/projected/1f92a242-f0d2-495e-a018-1888abeedda2-kube-api-access-v99lj\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.044227 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f92a242-f0d2-495e-a018-1888abeedda2-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.148676 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" path="/var/lib/kubelet/pods/013bd749-c6a7-42af-9bf4-96a35c5fc718/volumes" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.149554 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" path="/var/lib/kubelet/pods/60f78884-95af-4b4f-bc63-66d8c883f9dc/volumes" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.150463 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f56c4d-2217-41d4-82dc-aef9c5b5096e" path="/var/lib/kubelet/pods/93f56c4d-2217-41d4-82dc-aef9c5b5096e/volumes" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.151732 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" path="/var/lib/kubelet/pods/9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63/volumes" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.152564 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" path="/var/lib/kubelet/pods/b69c848e-e4d1-45f3-8bd2-362ffbc93130/volumes" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.154134 4710 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b967853a-325f-468f-8198-56df77075edf" path="/var/lib/kubelet/pods/b967853a-325f-468f-8198-56df77075edf/volumes" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.154958 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" path="/var/lib/kubelet/pods/e663d5c3-28d1-41de-bc55-18a61513b493/volumes" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.523520 4710 generic.go:334] "Generic (PLEG): container finished" podID="1f92a242-f0d2-495e-a018-1888abeedda2" containerID="e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae" exitCode=0 Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.523592 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kn962" event={"ID":"1f92a242-f0d2-495e-a018-1888abeedda2","Type":"ContainerDied","Data":"e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae"} Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.523602 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kn962" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.523628 4710 scope.go:117] "RemoveContainer" containerID="e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.523617 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kn962" event={"ID":"1f92a242-f0d2-495e-a018-1888abeedda2","Type":"ContainerDied","Data":"ed53bc3511075a132cbc7981444d4e57ca8c9de371f90424b4e54a9415d24acc"} Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.544470 4710 scope.go:117] "RemoveContainer" containerID="b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.546412 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kn962"] Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.551568 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kn962"] Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.586098 4710 scope.go:117] "RemoveContainer" containerID="6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.601722 4710 scope.go:117] "RemoveContainer" containerID="e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.602291 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae\": container with ID starting with e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae not found: ID does not exist" containerID="e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.602319 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae"} err="failed to get container status \"e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae\": rpc error: code = NotFound desc = could not find container \"e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae\": container with ID starting with 
e327554c424391d4771df5e9192eb1340a674817a39f62759fccadb324c0e2ae not found: ID does not exist" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.602339 4710 scope.go:117] "RemoveContainer" containerID="b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.602665 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7\": container with ID starting with b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7 not found: ID does not exist" containerID="b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.602691 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7"} err="failed to get container status \"b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7\": rpc error: code = NotFound desc = could not find container \"b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7\": container with ID starting with b38abed9cd3fd82be9fae0fd17d5fd588a012bfe3895adc909ce5fae5d0bc9b7 not found: ID does not exist" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.602707 4710 scope.go:117] "RemoveContainer" containerID="6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.602963 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f\": container with ID starting with 6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f not found: ID does not exist" containerID="6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.602988 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f"} err="failed to get container status \"6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f\": rpc error: code = NotFound desc = could not find container \"6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f\": container with ID starting with 6189c4f9324c72423ccf50e11ed7f9f8672f63f268f4e40b179c854ced85ee3f not found: ID does not exist" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738592 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z8fvm"] Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738790 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738800 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738810 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b967853a-325f-468f-8198-56df77075edf" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738817 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="b967853a-325f-468f-8198-56df77075edf" containerName="registry-server" Nov 28 17:02:41 crc 
kubenswrapper[4710]: E1128 17:02:41.738827 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b967853a-325f-468f-8198-56df77075edf" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738834 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="b967853a-325f-468f-8198-56df77075edf" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738840 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738846 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738852 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738858 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738865 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738871 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738879 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738885 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738892 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738899 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738907 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738913 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738921 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93f56c4d-2217-41d4-82dc-aef9c5b5096e" containerName="marketplace-operator" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738928 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f56c4d-2217-41d4-82dc-aef9c5b5096e" containerName="marketplace-operator" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738935 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b967853a-325f-468f-8198-56df77075edf" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738941 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="b967853a-325f-468f-8198-56df77075edf" 
containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738950 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738955 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738965 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738970 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738975 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738983 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.738990 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.738997 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.739008 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739013 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.739020 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739026 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.739033 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739039 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.739048 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739053 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.739166 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739176 4710 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.739184 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739190 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerName="extract-utilities" Nov 28 17:02:41 crc kubenswrapper[4710]: E1128 17:02:41.739197 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739202 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" containerName="extract-content" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739288 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="b967853a-325f-468f-8198-56df77075edf" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739299 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="60f78884-95af-4b4f-bc63-66d8c883f9dc" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739308 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739315 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="e663d5c3-28d1-41de-bc55-18a61513b493" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739322 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2c1123-1a92-4fc3-ae1b-f1472aaf2e63" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739329 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="013bd749-c6a7-42af-9bf4-96a35c5fc718" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739338 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="b69c848e-e4d1-45f3-8bd2-362ffbc93130" containerName="registry-server" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.739346 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="93f56c4d-2217-41d4-82dc-aef9c5b5096e" containerName="marketplace-operator" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.740618 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.743731 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.747829 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8fvm"] Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.754215 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69fc5b9f-c1de-4e0f-9f04-1a9db62f2814-catalog-content\") pod \"redhat-operators-z8fvm\" (UID: \"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814\") " pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.754297 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69fc5b9f-c1de-4e0f-9f04-1a9db62f2814-utilities\") pod \"redhat-operators-z8fvm\" (UID: \"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814\") " pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.754784 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq5f9\" (UniqueName: \"kubernetes.io/projected/69fc5b9f-c1de-4e0f-9f04-1a9db62f2814-kube-api-access-gq5f9\") pod \"redhat-operators-z8fvm\" (UID: \"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814\") " pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.855400 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69fc5b9f-c1de-4e0f-9f04-1a9db62f2814-utilities\") pod \"redhat-operators-z8fvm\" (UID: \"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814\") " pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.855553 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq5f9\" (UniqueName: \"kubernetes.io/projected/69fc5b9f-c1de-4e0f-9f04-1a9db62f2814-kube-api-access-gq5f9\") pod \"redhat-operators-z8fvm\" (UID: \"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814\") " pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.855646 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69fc5b9f-c1de-4e0f-9f04-1a9db62f2814-catalog-content\") pod \"redhat-operators-z8fvm\" (UID: \"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814\") " pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.856399 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69fc5b9f-c1de-4e0f-9f04-1a9db62f2814-utilities\") pod \"redhat-operators-z8fvm\" (UID: \"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814\") " pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.856485 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69fc5b9f-c1de-4e0f-9f04-1a9db62f2814-catalog-content\") pod \"redhat-operators-z8fvm\" (UID: \"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814\") " 
pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:41 crc kubenswrapper[4710]: I1128 17:02:41.874452 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq5f9\" (UniqueName: \"kubernetes.io/projected/69fc5b9f-c1de-4e0f-9f04-1a9db62f2814-kube-api-access-gq5f9\") pod \"redhat-operators-z8fvm\" (UID: \"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814\") " pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:42 crc kubenswrapper[4710]: I1128 17:02:42.063048 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:42 crc kubenswrapper[4710]: I1128 17:02:42.250855 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z8fvm"] Nov 28 17:02:42 crc kubenswrapper[4710]: W1128 17:02:42.259241 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69fc5b9f_c1de_4e0f_9f04_1a9db62f2814.slice/crio-f712d8f7d36568ed9a77a041ebbc3ae4201b825969d8009ebe0e7d93459ea84f WatchSource:0}: Error finding container f712d8f7d36568ed9a77a041ebbc3ae4201b825969d8009ebe0e7d93459ea84f: Status 404 returned error can't find the container with id f712d8f7d36568ed9a77a041ebbc3ae4201b825969d8009ebe0e7d93459ea84f Nov 28 17:02:42 crc kubenswrapper[4710]: I1128 17:02:42.539795 4710 generic.go:334] "Generic (PLEG): container finished" podID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" containerID="9a193345906ee9289fb0f508cff6e087d58de0adf9c4d3173a10a9bdf9e5c728" exitCode=0 Nov 28 17:02:42 crc kubenswrapper[4710]: I1128 17:02:42.539909 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8fvm" event={"ID":"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814","Type":"ContainerDied","Data":"9a193345906ee9289fb0f508cff6e087d58de0adf9c4d3173a10a9bdf9e5c728"} Nov 28 17:02:42 crc kubenswrapper[4710]: I1128 17:02:42.540011 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8fvm" event={"ID":"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814","Type":"ContainerStarted","Data":"f712d8f7d36568ed9a77a041ebbc3ae4201b825969d8009ebe0e7d93459ea84f"} Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.138888 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c4w9z"] Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.140264 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.143081 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.169146 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f92a242-f0d2-495e-a018-1888abeedda2" path="/var/lib/kubelet/pods/1f92a242-f0d2-495e-a018-1888abeedda2/volumes" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.169679 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4w9z"] Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.278417 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8s92\" (UniqueName: \"kubernetes.io/projected/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-kube-api-access-r8s92\") pod \"certified-operators-c4w9z\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") " pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.278614 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-utilities\") pod \"certified-operators-c4w9z\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") " pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.278895 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-catalog-content\") pod \"certified-operators-c4w9z\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") " pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.381026 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-catalog-content\") pod \"certified-operators-c4w9z\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") " pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.381152 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8s92\" (UniqueName: \"kubernetes.io/projected/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-kube-api-access-r8s92\") pod \"certified-operators-c4w9z\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") " pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.381204 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-utilities\") pod \"certified-operators-c4w9z\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") " pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.382018 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-utilities\") pod \"certified-operators-c4w9z\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") " pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: 
I1128 17:02:43.382166 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-catalog-content\") pod \"certified-operators-c4w9z\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") " pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.405431 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8s92\" (UniqueName: \"kubernetes.io/projected/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-kube-api-access-r8s92\") pod \"certified-operators-c4w9z\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") " pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.473963 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.565233 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8fvm" event={"ID":"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814","Type":"ContainerStarted","Data":"6f57970625bb3311f0ac981972ee1b5b23db0c6782e99250f2a24f1ffc4d3086"} Nov 28 17:02:43 crc kubenswrapper[4710]: I1128 17:02:43.669192 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c4w9z"] Nov 28 17:02:43 crc kubenswrapper[4710]: W1128 17:02:43.678953 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89df42e9_55bb_4ac9_b1b9_57f42b7e62c0.slice/crio-be596a2555a618ad082d84fee414227d50be4060901c231bf06cb44e826a3499 WatchSource:0}: Error finding container be596a2555a618ad082d84fee414227d50be4060901c231bf06cb44e826a3499: Status 404 returned error can't find the container with id be596a2555a618ad082d84fee414227d50be4060901c231bf06cb44e826a3499 Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.137536 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6wpc2"] Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.138706 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.140865 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.153358 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6wpc2"] Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.293139 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-utilities\") pod \"community-operators-6wpc2\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.293342 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x425\" (UniqueName: \"kubernetes.io/projected/070ba80e-9b6b-4149-b0ac-a95183059050-kube-api-access-6x425\") pod \"community-operators-6wpc2\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.293364 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-catalog-content\") pod \"community-operators-6wpc2\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.394419 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-utilities\") pod \"community-operators-6wpc2\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.394546 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x425\" (UniqueName: \"kubernetes.io/projected/070ba80e-9b6b-4149-b0ac-a95183059050-kube-api-access-6x425\") pod \"community-operators-6wpc2\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.394572 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-catalog-content\") pod \"community-operators-6wpc2\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.395090 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-utilities\") pod \"community-operators-6wpc2\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.395120 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-catalog-content\") pod \"community-operators-6wpc2\" (UID: 
\"070ba80e-9b6b-4149-b0ac-a95183059050\") " pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.413138 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x425\" (UniqueName: \"kubernetes.io/projected/070ba80e-9b6b-4149-b0ac-a95183059050-kube-api-access-6x425\") pod \"community-operators-6wpc2\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.467599 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.597475 4710 generic.go:334] "Generic (PLEG): container finished" podID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" containerID="6f57970625bb3311f0ac981972ee1b5b23db0c6782e99250f2a24f1ffc4d3086" exitCode=0 Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.597584 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8fvm" event={"ID":"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814","Type":"ContainerDied","Data":"6f57970625bb3311f0ac981972ee1b5b23db0c6782e99250f2a24f1ffc4d3086"} Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.600855 4710 generic.go:334] "Generic (PLEG): container finished" podID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerID="cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6" exitCode=0 Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.600891 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4w9z" event={"ID":"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0","Type":"ContainerDied","Data":"cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6"} Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.600918 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4w9z" event={"ID":"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0","Type":"ContainerStarted","Data":"be596a2555a618ad082d84fee414227d50be4060901c231bf06cb44e826a3499"} Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.708971 4710 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709543 4710 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709573 4710 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.709668 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709684 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.709692 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709699 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 17:02:44 crc 
kubenswrapper[4710]: E1128 17:02:44.709706 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709711 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.709719 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709724 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.709733 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709739 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.709747 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709755 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.709915 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709922 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710000 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710009 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710017 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710025 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710034 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.709894 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710169 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac" gracePeriod=15 Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710332 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1" gracePeriod=15 Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710401 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f" gracePeriod=15 Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710448 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325" gracePeriod=15 Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.710493 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a" gracePeriod=15 Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.711051 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.712940 4710 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.744514 4710 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.205:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.800864 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.800934 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.800976 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.801034 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.801059 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.801100 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.801114 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.801162 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.902865 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.902954 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.902975 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.902968 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903046 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903109 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903130 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903141 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903165 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903190 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903193 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903213 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903239 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903263 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903276 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: I1128 17:02:44.903290 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.996475 4710 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 28 17:02:44 crc kubenswrapper[4710]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57" Netns:"/var/run/netns/7de64a3f-df22-47c8-80b5-5018fa75f129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 17:02:44 crc kubenswrapper[4710]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 28 17:02:44 crc kubenswrapper[4710]: > Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.996728 4710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 28 17:02:44 crc kubenswrapper[4710]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57" Netns:"/var/run/netns/7de64a3f-df22-47c8-80b5-5018fa75f129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 17:02:44 crc kubenswrapper[4710]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 28 17:02:44 crc kubenswrapper[4710]: > pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.996746 4710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Nov 28 17:02:44 crc kubenswrapper[4710]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57" Netns:"/var/run/netns/7de64a3f-df22-47c8-80b5-5018fa75f129" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 17:02:44 crc kubenswrapper[4710]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 28 17:02:44 crc kubenswrapper[4710]: > pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.996830 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-6wpc2_openshift-marketplace(070ba80e-9b6b-4149-b0ac-a95183059050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-6wpc2_openshift-marketplace(070ba80e-9b6b-4149-b0ac-a95183059050)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57\\\" Netns:\\\"/var/run/netns/7de64a3f-df22-47c8-80b5-5018fa75f129\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s\\\": dial tcp 38.129.56.205:6443: connect: connection refused\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/community-operators-6wpc2" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" Nov 28 17:02:44 crc kubenswrapper[4710]: E1128 17:02:44.997274 4710 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.205:6443: connect: connection refused" event=< Nov 28 17:02:44 crc kubenswrapper[4710]: &Event{ObjectMeta:{community-operators-6wpc2.187c3a664851b86b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-6wpc2,UID:070ba80e-9b6b-4149-b0ac-a95183059050,APIVersion:v1,ResourceVersion:29580,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57" Netns:"/var/run/netns/7de64a3f-df22-47c8-80b5-5018fa75f129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 17:02:44 crc kubenswrapper[4710]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 17:02:44.996782187 +0000 UTC m=+254.255082252,LastTimestamp:2025-11-28 17:02:44.996782187 +0000 UTC m=+254.255082252,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
Nov 28 17:02:44 crc kubenswrapper[4710]: >
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.045643 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.606718 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"430600c5b0625ab3293846277a3abf049b0f9876691d1507bc3021d66562d178"}
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.608228 4710 generic.go:334] "Generic (PLEG): container finished" podID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" containerID="174f806b3b483309150188b14910e517268bd2cf06bc53cf4033b824d45a0543" exitCode=0
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.608290 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ca1472a-cb3f-49dd-bc30-ab277096f0e0","Type":"ContainerDied","Data":"174f806b3b483309150188b14910e517268bd2cf06bc53cf4033b824d45a0543"}
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.608855 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.613240 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.614674 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.615746 4710 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1" exitCode=0
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.615795 4710 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f" exitCode=0
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.615806 4710 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325" exitCode=0
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.615813 4710 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a" exitCode=2
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.615865 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wpc2"
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.616239 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wpc2"
Nov 28 17:02:45 crc kubenswrapper[4710]: I1128 17:02:45.616406 4710 scope.go:117] "RemoveContainer" containerID="6f416f0990ef5488e7b3824cc8bf64e5f029b1333e249d0dc3e2a242214e9528"
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.176327 4710 log.go:32] "RunPodSandbox from runtime service failed" err=<
Nov 28 17:02:46 crc kubenswrapper[4710]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7" Netns:"/var/run/netns/18065d45-2ddd-4ad9-a308-7c82e960b87e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 17:02:46 crc kubenswrapper[4710]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Nov 28 17:02:46 crc kubenswrapper[4710]: >
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.176806 4710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Nov 28 17:02:46 crc kubenswrapper[4710]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7" Netns:"/var/run/netns/18065d45-2ddd-4ad9-a308-7c82e960b87e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 17:02:46 crc kubenswrapper[4710]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Nov 28 17:02:46 crc kubenswrapper[4710]: > pod="openshift-marketplace/community-operators-6wpc2"
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.176831 4710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Nov 28 17:02:46 crc kubenswrapper[4710]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7" Netns:"/var/run/netns/18065d45-2ddd-4ad9-a308-7c82e960b87e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 17:02:46 crc kubenswrapper[4710]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Nov 28 17:02:46 crc kubenswrapper[4710]: > pod="openshift-marketplace/community-operators-6wpc2"
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.176891 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-6wpc2_openshift-marketplace(070ba80e-9b6b-4149-b0ac-a95183059050)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-6wpc2_openshift-marketplace(070ba80e-9b6b-4149-b0ac-a95183059050)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7\\\" Netns:\\\"/var/run/netns/18065d45-2ddd-4ad9-a308-7c82e960b87e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=0f0b2740c268557b0ca67a9e3e93888a0b8103beb3f2c9317720435ff24856a7;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s\\\": dial tcp 38.129.56.205:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/community-operators-6wpc2" podUID="070ba80e-9b6b-4149-b0ac-a95183059050"
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.587095 4710 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.588039 4710 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.588697 4710 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.589068 4710 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.589333 4710 controller.go:195] "Failed to update lease" err="Put
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.589497 4710 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.589899 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="200ms" Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.624231 4710 generic.go:334] "Generic (PLEG): container finished" podID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerID="7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5" exitCode=0 Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.624317 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4w9z" event={"ID":"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0","Type":"ContainerDied","Data":"7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5"} Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.624809 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.625947 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.631335 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c450d131c74cdf7635ade48899c11000a16bebd6468c4619933413cabf7a4608"} Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.632100 4710 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.205:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.632439 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.632741 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc 
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.634805 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.638893 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z8fvm" event={"ID":"69fc5b9f-c1de-4e0f-9f04-1a9db62f2814","Type":"ContainerStarted","Data":"f532131441d51dcef4d481ac02095c139bfa1f4597ec26b6e9b17ccb4c93ec24"}
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.639022 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.639535 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.640128 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.790718 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="400ms"
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.843626 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6"
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.844903 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.845623 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.846269 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:46 crc kubenswrapper[4710]: I1128 17:02:46.847092 4710 status_manager.go:851] "Failed to get status 
for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.942275 4710 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.205:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" volumeName="registry-storage" Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.987732 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:02:46Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:02:46Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:02:46Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:02:46Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:20434c856c20158a4c73986bf7de93188afa338ed356d293a59f9e621072cfc3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24f7dab5f4a6fcbb16d41b8a7345f9f9bae2ef1e2c53abed71c4f18eeafebc85\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1ab7704f67839bb3705d0c80bea6f7197f233d472860c3005433c90d7786dd54\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9c13035c7ccf9d13a21c9219d8d0d462fa2fdb4fe128d9724443784b1ed9a318\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1205801806},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:485eae41e5a1129e031da03a9bc899702d16da22589d58a8e0c2910bc0226a23\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:86681c5c7f102911ba70f243ae7524f9a76939abbb50c93b1c80b70e07ccba62\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1195438934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":11510494
24},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e8990432556acad31519b1a73ec32f32d27c2034cf9e5cc4db8980efc7331594\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ebe9f523f5c211a3a0f2570331dddcd5be15b12c1fecd9b8b121f881bfaad029\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1129027903},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0
a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.988370 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.988694 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.989024 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.989432 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:46 crc kubenswrapper[4710]: E1128 17:02:46.989451 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.009497 4710 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.205:6443: connect: connection refused" event=< Nov 28 17:02:47 crc kubenswrapper[4710]: &Event{ObjectMeta:{community-operators-6wpc2.187c3a664851b86b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-6wpc2,UID:070ba80e-9b6b-4149-b0ac-a95183059050,APIVersion:v1,ResourceVersion:29580,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57" Netns:"/var/run/netns/7de64a3f-df22-47c8-80b5-5018fa75f129" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 17:02:47 crc kubenswrapper[4710]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 17:02:44.996782187 +0000 UTC m=+254.255082252,LastTimestamp:2025-11-28 17:02:44.996782187 +0000 UTC m=+254.255082252,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Nov 28 17:02:47 crc kubenswrapper[4710]: > Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.013091 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.014041 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.014517 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.015066 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.015374 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.136662 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kube-api-access\") pod \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.136805 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-var-lock\") pod \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.136906 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kubelet-dir\") pod \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\" (UID: \"2ca1472a-cb3f-49dd-bc30-ab277096f0e0\") " Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.137321 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2ca1472a-cb3f-49dd-bc30-ab277096f0e0" (UID: "2ca1472a-cb3f-49dd-bc30-ab277096f0e0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.139884 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-var-lock" (OuterVolumeSpecName: "var-lock") pod "2ca1472a-cb3f-49dd-bc30-ab277096f0e0" (UID: "2ca1472a-cb3f-49dd-bc30-ab277096f0e0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.144683 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2ca1472a-cb3f-49dd-bc30-ab277096f0e0" (UID: "2ca1472a-cb3f-49dd-bc30-ab277096f0e0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.192362 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="800ms" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.238339 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.238365 4710 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.238374 4710 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ca1472a-cb3f-49dd-bc30-ab277096f0e0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.464917 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.466238 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.466916 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.467422 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.467908 4710 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.468197 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.468461 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.643423 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.643511 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.643543 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.643614 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.643646 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.643728 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.643840 4710 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.643857 4710 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.643868 4710 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.645590 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.646417 4710 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac" exitCode=0 Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.646509 4710 scope.go:117] "RemoveContainer" containerID="e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.646478 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.649282 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4w9z" event={"ID":"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0","Type":"ContainerStarted","Data":"fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2"} Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.649735 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.650324 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.650646 4710 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.650990 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.651239 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.654129 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.654689 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"2ca1472a-cb3f-49dd-bc30-ab277096f0e0","Type":"ContainerDied","Data":"2bd08c004c2d994be1ea9e35e5f7cd7fbe844fac6047e6ada6b0f882bc2e3cb4"} Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.654716 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bd08c004c2d994be1ea9e35e5f7cd7fbe844fac6047e6ada6b0f882bc2e3cb4" Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.655353 4710 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.205:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.657352 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.657814 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.658002 4710 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.658164 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.658323 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.663813 4710 scope.go:117] "RemoveContainer" containerID="1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.665853 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc 
kubenswrapper[4710]: I1128 17:02:47.666220 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.666479 4710 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.666688 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.666975 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.682068 4710 scope.go:117] "RemoveContainer" containerID="e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.696027 4710 scope.go:117] "RemoveContainer" containerID="f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.711257 4710 scope.go:117] "RemoveContainer" containerID="f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.728489 4710 scope.go:117] "RemoveContainer" containerID="69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.755416 4710 scope.go:117] "RemoveContainer" containerID="e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1" Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.759367 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\": container with ID starting with e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1 not found: ID does not exist" containerID="e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.759420 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1"} err="failed to get container status \"e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\": rpc error: code = NotFound desc = could not find container \"e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1\": container with ID starting with e7e7f74989cfbe45e8729f5ab3fc28f8ff746d00cf73cd76e66286f5e3dcffc1 not found: ID does not exist" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 
17:02:47.759461 4710 scope.go:117] "RemoveContainer" containerID="1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f" Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.760066 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\": container with ID starting with 1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f not found: ID does not exist" containerID="1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.760111 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f"} err="failed to get container status \"1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\": rpc error: code = NotFound desc = could not find container \"1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f\": container with ID starting with 1680e8ce8d056f414f73fddbfed76f70d250447f407bbbe88a76300e7d09518f not found: ID does not exist" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.760167 4710 scope.go:117] "RemoveContainer" containerID="e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325" Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.760492 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\": container with ID starting with e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325 not found: ID does not exist" containerID="e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.760517 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325"} err="failed to get container status \"e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\": rpc error: code = NotFound desc = could not find container \"e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325\": container with ID starting with e0263cbb54573b88a7d68f30d9970cde5bfe7984407e1c84e889eba484f48325 not found: ID does not exist" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.760534 4710 scope.go:117] "RemoveContainer" containerID="f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a" Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.760899 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\": container with ID starting with f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a not found: ID does not exist" containerID="f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a" Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.760939 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a"} err="failed to get container status \"f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\": rpc error: code = NotFound desc = could not find container \"f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a\": container with ID 
starting with f330560998dfcaba182636dde74ea49e15475c2389a8ee568deaae72ed943c2a not found: ID does not exist"
Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.760989 4710 scope.go:117] "RemoveContainer" containerID="f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac"
Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.761283 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\": container with ID starting with f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac not found: ID does not exist" containerID="f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac"
Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.761315 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac"} err="failed to get container status \"f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\": rpc error: code = NotFound desc = could not find container \"f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac\": container with ID starting with f0c4f64be3e27662503feaf5ef1d3e1c65049af3bb4aeba1e672d2d86a727eac not found: ID does not exist"
Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.761336 4710 scope.go:117] "RemoveContainer" containerID="69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f"
Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.761601 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\": container with ID starting with 69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f not found: ID does not exist" containerID="69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f"
Nov 28 17:02:47 crc kubenswrapper[4710]: I1128 17:02:47.761650 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f"} err="failed to get container status \"69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\": rpc error: code = NotFound desc = could not find container \"69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f\": container with ID starting with 69186456d581e3d3740729eabdfccf7a037993e933f68c86ddbf0ebe856a648f not found: ID does not exist"
Nov 28 17:02:47 crc kubenswrapper[4710]: E1128 17:02:47.993519 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="1.6s"
Nov 28 17:02:49 crc kubenswrapper[4710]: I1128 17:02:49.149109 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Nov 28 17:02:49 crc kubenswrapper[4710]: E1128 17:02:49.594221 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="3.2s"
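The RemoveContainer / "DeleteContainer returned error" pairs above are benign: the kube-apiserver-crc containers were already removed, so every follow-up CRI ContainerStatus call answers gRPC code = NotFound and the kubelet just logs it and moves on. A sketch of that idempotent-deletion pattern under the same gRPC status codes; containerStatus and removeContainer are hypothetical stand-ins (the google.golang.org/grpc status and codes packages are the real ones), not kubelet or CRI-O source:

```go
// remove_idempotent.go — treat gRPC NotFound from a status lookup as
// "already deleted", mirroring the log's RemoveContainer sequence.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// containerStatus mimics the CRI ContainerStatus RPC: it answers NotFound
// for IDs the runtime no longer knows, as in the log.
func containerStatus(id string, containers map[string]bool) error {
	if !containers[id] {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	return nil
}

// removeContainer deletes a container but tolerates NotFound, so a second
// deletion attempt is a no-op instead of an error.
func removeContainer(id string, containers map[string]bool) error {
	if err := containerStatus(id, containers); err != nil {
		if status.Code(err) == codes.NotFound {
			return nil // already gone: deletion is idempotent
		}
		return fmt.Errorf("failed to get container status %q: %w", id, err)
	}
	delete(containers, id)
	return nil
}

func main() {
	containers := map[string]bool{"e7e7f749": true}
	fmt.Println(removeContainer("e7e7f749", containers)) // <nil>: actually removed
	fmt.Println(removeContainer("e7e7f749", containers)) // <nil>: NotFound tolerated
}
```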
Nov 28 17:02:51 crc kubenswrapper[4710]: I1128 17:02:51.144573 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:51 crc kubenswrapper[4710]: I1128 17:02:51.146748 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:51 crc kubenswrapper[4710]: I1128 17:02:51.147234 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:51 crc kubenswrapper[4710]: I1128 17:02:51.147558 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.063657 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z8fvm"
Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.066101 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z8fvm"
Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.113683 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z8fvm"
Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.114767 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.115380 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.115578 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.115716 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.719548 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z8fvm" Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.720646 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.721090 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.721445 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[4710]: I1128 17:02:52.721744 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:52 crc kubenswrapper[4710]: E1128 17:02:52.794979 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="6.4s" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.475312 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.475372 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.531834 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.532273 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.532568 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.532939 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.533405 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.730995 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c4w9z" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.732256 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.732455 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.732634 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:53 crc kubenswrapper[4710]: I1128 17:02:53.732845 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:57 crc kubenswrapper[4710]: E1128 17:02:57.010966 4710 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.129.56.205:6443: connect: connection refused" event=< Nov 28 17:02:57 crc kubenswrapper[4710]: &Event{ObjectMeta:{community-operators-6wpc2.187c3a664851b86b openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-6wpc2,UID:070ba80e-9b6b-4149-b0ac-a95183059050,APIVersion:v1,ResourceVersion:29580,FieldPath:,},Reason:FailedCreatePodSandBox,Message:Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_community-operators-6wpc2_openshift-marketplace_070ba80e-9b6b-4149-b0ac-a95183059050_0(79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57): error adding pod openshift-marketplace_community-operators-6wpc2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57" Netns:"/var/run/netns/7de64a3f-df22-47c8-80b5-5018fa75f129" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-6wpc2;K8S_POD_INFRA_CONTAINER_ID=79462e97834a84896d5020946c2a7fe3675f938e091c3792e24aa04b01acde57;K8S_POD_UID=070ba80e-9b6b-4149-b0ac-a95183059050" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-6wpc2] networking: Multus: [openshift-marketplace/community-operators-6wpc2/070ba80e-9b6b-4149-b0ac-a95183059050]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-6wpc2 in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-6wpc2 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6wpc2?timeout=1m0s": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 17:02:57 crc kubenswrapper[4710]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"},Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-28 17:02:44.996782187 +0000 UTC m=+254.255082252,LastTimestamp:2025-11-28 17:02:44.996782187 +0000 UTC m=+254.255082252,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Nov 28 17:02:57 crc kubenswrapper[4710]: > Nov 28 17:02:57 crc kubenswrapper[4710]: I1128 17:02:57.140830 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:57 crc kubenswrapper[4710]: I1128 17:02:57.142141 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:57 crc kubenswrapper[4710]: I1128 17:02:57.142539 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:57 crc kubenswrapper[4710]: I1128 17:02:57.142868 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:57 crc kubenswrapper[4710]: I1128 17:02:57.143084 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:57 crc kubenswrapper[4710]: I1128 17:02:57.162857 4710 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:02:57 crc kubenswrapper[4710]: I1128 17:02:57.162905 4710 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:02:57 crc kubenswrapper[4710]: E1128 17:02:57.163577 4710 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:57 crc kubenswrapper[4710]: I1128 17:02:57.164443 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:57 crc kubenswrapper[4710]: E1128 17:02:57.182981 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:02:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:02:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:02:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-28T17:02:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:20434c856c20158a4c73986bf7de93188afa338ed356d293a59f9e621072cfc3\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:24f7dab5f4a6fcbb16d41b8a7345f9f9bae2ef1e2c53abed71c4f18eeafebc85\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1605131077},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1ab7704f67839bb3705d0c80bea6f7197f233d472860c3005433c90d7786dd54\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9c13035c7ccf9d13a21c9219d8d0d462fa2fdb4fe128d9724443784b1ed9a318\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1205801806},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:485eae41e5a1129e031da03a9bc899702d16da22589d58a8e0c2910bc0226a23\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:86681c5c7f102911ba70f243ae7524f9a76939abbb50c93b1c80b70e07ccba62\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1195438934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e8990432556acad31519b1a73ec32f32d27c2034cf9e5cc4db8980efc7331594\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:ebe9f523f5c211a3a0f2570331dddcd5be15b12c1fecd9b8b121f881bfaad029\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1129027903},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"na
mes\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0
d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 
38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:57 crc kubenswrapper[4710]: E1128 17:02:57.183442 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:57 crc kubenswrapper[4710]: E1128 17:02:57.183688 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:57 crc kubenswrapper[4710]: E1128 17:02:57.183916 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:57 crc kubenswrapper[4710]: E1128 17:02:57.184068 4710 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused"
Nov 28 17:02:57 crc kubenswrapper[4710]: E1128 17:02:57.184086 4710 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Nov 28 17:02:57 crc kubenswrapper[4710]: I1128 17:02:57.707415 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c49f9b01e641b752e725698a1c7732435ad180d9c9c6199e73cafd523dff493"}
Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.105974 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.106035 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.106072 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.106107 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
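
In the kubelet_node_status.go burst above, one failed PATCH of the full node status is followed by four failed GETs of the node and then "Unable to update node status" err="update node status exceeds retry count": the status sync is bounded at a fixed number of attempts per round (five, counting the PATCH, as read off these entries) rather than backing off inside the round, and the kubelet simply tries again on its next sync tick. A minimal sketch of that bounded-attempts shape, with hypothetical names, not kubelet source:

    // Illustrative sketch only: bounded retries per sync round, as suggested
    // by the five failures and the "exceeds retry count" entry above.
    package main

    import (
    	"errors"
    	"fmt"
    )

    const nodeStatusUpdateRetry = 5 // attempt limit inferred from the log above

    // patchNodeStatus stands in for the PATCH against /api/v1/nodes/crc/status.
    func patchNodeStatus() error {
    	return errors.New("dial tcp 38.129.56.205:6443: connect: connection refused")
    }

    func updateNodeStatus() error {
    	for i := 0; i < nodeStatusUpdateRetry; i++ {
    		err := patchNodeStatus()
    		if err == nil {
    			return nil
    		}
    		fmt.Printf("Error updating node status, will retry: %v\n", err)
    	}
    	return errors.New("update node status exceeds retry count")
    }

    func main() {
    	if err := updateNodeStatus(); err != nil {
    		fmt.Println("Unable to update node status:", err)
    	}
    }
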
Nov 28 17:02:59 crc kubenswrapper[4710]: W1128 17:02:59.106700 4710 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27206": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 17:02:59 crc kubenswrapper[4710]: E1128 17:02:59.106795 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27206\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError"
Nov 28 17:02:59 crc kubenswrapper[4710]: W1128 17:02:59.106732 4710 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27207": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 17:02:59 crc kubenswrapper[4710]: W1128 17:02:59.106817 4710 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27206": dial tcp 38.129.56.205:6443: connect: connection refused
Nov 28 17:02:59 crc kubenswrapper[4710]: E1128 17:02:59.106864 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27207\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError"
Nov 28 17:02:59 crc kubenswrapper[4710]: E1128 17:02:59.107072 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27206\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError"
Nov 28 17:02:59 crc kubenswrapper[4710]: E1128 17:02:59.196497 4710 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.205:6443: connect: connection refused" interval="7s"
Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.528232 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" podUID="5a82d2d7-4966-4dff-b1bf-5995aedd9fae" containerName="oauth-openshift" containerID="cri-o://40e260f6329a2be33482c379bfcc8fb61d36893ee653405702e8503da6a9f658" gracePeriod=15
Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.720092 4710 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.720158 4710 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b" exitCode=1 Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.720225 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b"} Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.720563 4710 scope.go:117] "RemoveContainer" containerID="ba634f8497e8d49092745f1494e974a23de5c25234c5651ed7c4748a7266ee2b" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.721324 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.721870 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.721934 4710 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="6f851f51ac8b6dc6b3e15cedfacb3202e8e66ccdb0b0ef3a342ffaeeff7650b6" exitCode=0 Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.721986 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"6f851f51ac8b6dc6b3e15cedfacb3202e8e66ccdb0b0ef3a342ffaeeff7650b6"} Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.722119 4710 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.722132 4710 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.722396 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: E1128 17:02:59.722501 4710 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.722962 4710 status_manager.go:851] "Failed to get status for pod" 
podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.723544 4710 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.723951 4710 generic.go:334] "Generic (PLEG): container finished" podID="5a82d2d7-4966-4dff-b1bf-5995aedd9fae" containerID="40e260f6329a2be33482c379bfcc8fb61d36893ee653405702e8503da6a9f658" exitCode=0 Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.723981 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" event={"ID":"5a82d2d7-4966-4dff-b1bf-5995aedd9fae","Type":"ContainerDied","Data":"40e260f6329a2be33482c379bfcc8fb61d36893ee653405702e8503da6a9f658"} Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.724182 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.724752 4710 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.725138 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.725864 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.726411 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.981690 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.982348 4710 status_manager.go:851] "Failed to get status for pod" podUID="69fc5b9f-c1de-4e0f-9f04-1a9db62f2814" pod="openshift-marketplace/redhat-operators-z8fvm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-z8fvm\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.982679 4710 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.983046 4710 status_manager.go:851] "Failed to get status for pod" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" pod="openshift-marketplace/certified-operators-c4w9z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c4w9z\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.983266 4710 status_manager.go:851] "Failed to get status for pod" podUID="606e7810-91c6-46a0-9a31-67713c3cfe5e" pod="openshift-image-registry/image-registry-66df7c8f76-c9rg6" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-66df7c8f76-c9rg6\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.983455 4710 status_manager.go:851] "Failed to get status for pod" podUID="5a82d2d7-4966-4dff-b1bf-5995aedd9fae" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-v7m54\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:02:59 crc kubenswrapper[4710]: I1128 17:02:59.983643 4710 status_manager.go:851] "Failed to get status for pod" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.205:6443: connect: connection refused" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.019936 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-provider-selection\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020176 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-idp-0-file-data\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020196 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-trusted-ca-bundle\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020219 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-session\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020242 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-serving-cert\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020262 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-ocp-branding-template\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020288 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-router-certs\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020305 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-login\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020327 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-service-ca\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020346 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-dir\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020369 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh2g4\" (UniqueName: \"kubernetes.io/projected/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-kube-api-access-bh2g4\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020403 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-cliconfig\") pod 
\"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020435 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-error\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.020459 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-policies\") pod \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\" (UID: \"5a82d2d7-4966-4dff-b1bf-5995aedd9fae\") " Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.021354 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.021446 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.021555 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.021618 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.021236 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.024860 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-kube-api-access-bh2g4" (OuterVolumeSpecName: "kube-api-access-bh2g4") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). 
InnerVolumeSpecName "kube-api-access-bh2g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.024908 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.025162 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.025234 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.025314 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.025428 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.025506 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.025643 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.026461 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5a82d2d7-4966-4dff-b1bf-5995aedd9fae" (UID: "5a82d2d7-4966-4dff-b1bf-5995aedd9fae"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.106306 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.106356 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.106418 4710 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.106456 4710 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.106611 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 17:05:02.106586624 +0000 UTC m=+391.364886689 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.106696 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-28 17:05:02.106672677 +0000 UTC m=+391.364972792 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Nov 28 17:03:00 crc kubenswrapper[4710]: W1128 17:03:00.107080 4710 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27206": dial tcp 38.129.56.205:6443: connect: connection refused Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.107214 4710 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27206\": dial tcp 38.129.56.205:6443: connect: connection refused" logger="UnhandledError" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.121643 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.121691 4710 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.121710 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.121725 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.121738 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.121751 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.121974 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.121995 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.122009 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.122024 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.122034 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.122046 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh2g4\" (UniqueName: \"kubernetes.io/projected/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-kube-api-access-bh2g4\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.122058 4710 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.122068 4710 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5a82d2d7-4966-4dff-b1bf-5995aedd9fae-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.166341 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.189837 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 28 17:03:00 crc kubenswrapper[4710]: E1128 17:03:00.196989 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.743848 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d2f167b77376d7ea95f4f25d7f6e90b02424363c9bfdcb9800dcbbb463784a84"} Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.743894 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a78c188ae3d1d033ce1cf4c6613ce2849147a523ea0327e47d4a6e576378838f"} Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.743905 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"26d3b2a0db053f56c1a119a34098f1e2d69757fe48d3a34009dd4db7412922f1"} Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.749984 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.750058 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"18cf85399b714d91d14d65b604fbad8e0151cbf7ab527cc3618b399b3f36d949"} Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.751950 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" event={"ID":"5a82d2d7-4966-4dff-b1bf-5995aedd9fae","Type":"ContainerDied","Data":"5a3f8eb724e64786a9c08c75847247f8fe5afe7542a7cc0c8d9255c2f527a9a2"} Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.751990 4710 scope.go:117] "RemoveContainer" containerID="40e260f6329a2be33482c379bfcc8fb61d36893ee653405702e8503da6a9f658" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.752014 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-v7m54" Nov 28 17:03:00 crc kubenswrapper[4710]: I1128 17:03:00.957987 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 17:03:01 crc kubenswrapper[4710]: E1128 17:03:01.106467 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 28 17:03:01 crc kubenswrapper[4710]: E1128 17:03:01.106505 4710 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Nov 28 17:03:01 crc kubenswrapper[4710]: E1128 17:03:01.106503 4710 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 28 17:03:01 crc kubenswrapper[4710]: E1128 17:03:01.106539 4710 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Nov 28 17:03:01 crc kubenswrapper[4710]: E1128 17:03:01.106587 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-28 17:05:03.106564909 +0000 UTC m=+392.364864954 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition
Nov 28 17:03:01 crc kubenswrapper[4710]: E1128 17:03:01.106606 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-28 17:05:03.10659911 +0000 UTC m=+392.364899155 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition
Nov 28 17:03:01 crc kubenswrapper[4710]: I1128 17:03:01.141413 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wpc2"
Nov 28 17:03:01 crc kubenswrapper[4710]: I1128 17:03:01.142008 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wpc2"
Nov 28 17:03:01 crc kubenswrapper[4710]: I1128 17:03:01.761162 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a67a8af2bee5a41ecd976f0027abcc5c1f9ddff1849e5a7ec0b0cf0ee4e20535"}
Nov 28 17:03:01 crc kubenswrapper[4710]: I1128 17:03:01.761485 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1bbd5e24e08a5383ac306d311bd0b8baefa5b6c2b44860b1e230280300c34a90"}
Nov 28 17:03:01 crc kubenswrapper[4710]: I1128 17:03:01.761666 4710 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97"
Nov 28 17:03:01 crc kubenswrapper[4710]: I1128 17:03:01.761695 4710 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97"
Nov 28 17:03:02 crc kubenswrapper[4710]: I1128 17:03:02.165032 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 17:03:02 crc kubenswrapper[4710]: I1128 17:03:02.165407 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 28 17:03:02 crc kubenswrapper[4710]: I1128 17:03:02.171836 4710 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]log ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]etcd ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-apiserver-admission-initializer ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/openshift.io-api-request-count-filter ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/openshift.io-startkubeinformers ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/generic-apiserver-start-informers ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/priority-and-fairness-config-consumer ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/priority-and-fairness-filter ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/storage-object-count-tracker-hook ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-apiextensions-informers ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-apiextensions-controllers ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/crd-informer-synced ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-system-namespaces-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-cluster-authentication-info-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-legacy-token-tracking-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-service-ip-repair-controllers ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Nov 28 17:03:02 crc kubenswrapper[4710]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/priority-and-fairness-config-producer ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/bootstrap-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/start-kube-aggregator-informers ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/apiservice-status-local-available-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/apiservice-status-remote-available-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/apiservice-registration-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/apiservice-wait-for-first-sync ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/apiservice-discovery-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/kube-apiserver-autoregistration ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]autoregister-completion ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/apiservice-openapi-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: [+]poststarthook/apiservice-openapiv3-controller ok
Nov 28 17:03:02 crc kubenswrapper[4710]: livez check failed
Nov 28 17:03:02 crc kubenswrapper[4710]: I1128 17:03:02.173460 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 28 17:03:02 crc kubenswrapper[4710]: I1128 17:03:02.769915 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Nov 28
17:03:03 crc kubenswrapper[4710]: I1128 17:03:03.612422 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 17:03:03 crc kubenswrapper[4710]: I1128 17:03:03.618228 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 17:03:06 crc kubenswrapper[4710]: I1128 17:03:06.468318 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 28 17:03:06 crc kubenswrapper[4710]: I1128 17:03:06.483570 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 28 17:03:06 crc kubenswrapper[4710]: I1128 17:03:06.484720 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 28 17:03:06 crc kubenswrapper[4710]: W1128 17:03:06.527359 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070ba80e_9b6b_4149_b0ac_a95183059050.slice/crio-f6027bfb1cb12f10e3d7af318067c6779308709dbaaa9af9c215825db6a90384 WatchSource:0}: Error finding container f6027bfb1cb12f10e3d7af318067c6779308709dbaaa9af9c215825db6a90384: Status 404 returned error can't find the container with id f6027bfb1cb12f10e3d7af318067c6779308709dbaaa9af9c215825db6a90384 Nov 28 17:03:06 crc kubenswrapper[4710]: I1128 17:03:06.770836 4710 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:03:06 crc kubenswrapper[4710]: I1128 17:03:06.788445 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wpc2" event={"ID":"070ba80e-9b6b-4149-b0ac-a95183059050","Type":"ContainerStarted","Data":"f6027bfb1cb12f10e3d7af318067c6779308709dbaaa9af9c215825db6a90384"} Nov 28 17:03:06 crc kubenswrapper[4710]: I1128 17:03:06.788733 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:03:06 crc kubenswrapper[4710]: I1128 17:03:06.788808 4710 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:03:06 crc kubenswrapper[4710]: I1128 17:03:06.788837 4710 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:03:07 crc kubenswrapper[4710]: I1128 17:03:07.168967 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:03:07 crc kubenswrapper[4710]: I1128 17:03:07.171245 4710 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6db39893-4798-4bf4-86be-e8c8b98cb8dd" Nov 28 17:03:07 crc kubenswrapper[4710]: I1128 17:03:07.795424 4710 generic.go:334] "Generic (PLEG): container finished" podID="070ba80e-9b6b-4149-b0ac-a95183059050" containerID="31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9" exitCode=0 Nov 28 17:03:07 crc kubenswrapper[4710]: I1128 17:03:07.795947 4710 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 
17:03:07 crc kubenswrapper[4710]: I1128 17:03:07.795978 4710 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:03:07 crc kubenswrapper[4710]: I1128 17:03:07.796629 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wpc2" event={"ID":"070ba80e-9b6b-4149-b0ac-a95183059050","Type":"ContainerDied","Data":"31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9"} Nov 28 17:03:07 crc kubenswrapper[4710]: I1128 17:03:07.803300 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:03:08 crc kubenswrapper[4710]: I1128 17:03:08.802449 4710 generic.go:334] "Generic (PLEG): container finished" podID="070ba80e-9b6b-4149-b0ac-a95183059050" containerID="17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419" exitCode=0 Nov 28 17:03:08 crc kubenswrapper[4710]: I1128 17:03:08.802501 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wpc2" event={"ID":"070ba80e-9b6b-4149-b0ac-a95183059050","Type":"ContainerDied","Data":"17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419"} Nov 28 17:03:08 crc kubenswrapper[4710]: I1128 17:03:08.803100 4710 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:03:08 crc kubenswrapper[4710]: I1128 17:03:08.803115 4710 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:03:09 crc kubenswrapper[4710]: I1128 17:03:09.809753 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wpc2" event={"ID":"070ba80e-9b6b-4149-b0ac-a95183059050","Type":"ContainerStarted","Data":"8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff"} Nov 28 17:03:09 crc kubenswrapper[4710]: I1128 17:03:09.809816 4710 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:03:09 crc kubenswrapper[4710]: I1128 17:03:09.810189 4710 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:03:10 crc kubenswrapper[4710]: I1128 17:03:10.961651 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 28 17:03:11 crc kubenswrapper[4710]: I1128 17:03:11.141162 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:03:11 crc kubenswrapper[4710]: I1128 17:03:11.157798 4710 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6db39893-4798-4bf4-86be-e8c8b98cb8dd" Nov 28 17:03:12 crc kubenswrapper[4710]: I1128 17:03:12.141161 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:03:13 crc kubenswrapper[4710]: I1128 17:03:13.142943 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:03:14 crc kubenswrapper[4710]: I1128 17:03:14.468153 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:03:14 crc kubenswrapper[4710]: I1128 17:03:14.468227 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:03:14 crc kubenswrapper[4710]: I1128 17:03:14.512728 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:03:14 crc kubenswrapper[4710]: I1128 17:03:14.895177 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:03:15 crc kubenswrapper[4710]: I1128 17:03:15.929054 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 28 17:03:16 crc kubenswrapper[4710]: I1128 17:03:16.327192 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 28 17:03:16 crc kubenswrapper[4710]: I1128 17:03:16.470282 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 28 17:03:16 crc kubenswrapper[4710]: I1128 17:03:16.545920 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 28 17:03:17 crc kubenswrapper[4710]: I1128 17:03:17.122385 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 28 17:03:17 crc kubenswrapper[4710]: I1128 17:03:17.790270 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 28 17:03:17 crc kubenswrapper[4710]: I1128 17:03:17.929015 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 28 17:03:18 crc kubenswrapper[4710]: I1128 17:03:18.455296 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 28 17:03:18 crc kubenswrapper[4710]: I1128 17:03:18.700787 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 28 17:03:18 crc kubenswrapper[4710]: I1128 17:03:18.834456 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.033355 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.060455 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.103709 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.138745 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.251468 4710 reflector.go:368] Caches populated for *v1.Node from 
k8s.io/client-go/informers/factory.go:160 Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.362131 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.404468 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.459070 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.495957 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.518661 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.526595 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.584815 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.651841 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.765541 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.776668 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.785258 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.856123 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 28 17:03:19 crc kubenswrapper[4710]: I1128 17:03:19.951183 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.015620 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.095008 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.209144 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.265058 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.394517 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.423166 4710 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.424949 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.440057 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.472692 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.580013 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.646585 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.648942 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.861719 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.893040 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.925346 4710 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 28 17:03:20 crc kubenswrapper[4710]: I1128 17:03:20.968649 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.160072 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.166985 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.435389 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.445497 4710 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.477416 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.487549 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.528089 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.561092 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.574978 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 28 17:03:21 crc 
kubenswrapper[4710]: I1128 17:03:21.593111 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.630950 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.661090 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.945533 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 28 17:03:21 crc kubenswrapper[4710]: I1128 17:03:21.974553 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.020652 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.082823 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.092615 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.109371 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.157659 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.160583 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.220876 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.476963 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.499633 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.512128 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 28 17:03:22 crc kubenswrapper[4710]: I1128 17:03:22.707113 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:22.796821 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:22.952221 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.019724 4710 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.023949 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.108723 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.131259 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.184031 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.259964 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.261887 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.339184 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.437020 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.471297 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.559153 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.586362 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.788523 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.814981 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.856554 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.913687 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.966818 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 28 17:03:23 crc kubenswrapper[4710]: I1128 17:03:23.992278 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.001772 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.008982 4710 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.016664 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.047570 4710 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.099882 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.100046 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.179881 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.278563 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.306883 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.324799 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.399940 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.455444 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.528019 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.532560 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.777447 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.903931 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.920494 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 28 17:03:24 crc kubenswrapper[4710]: I1128 17:03:24.994548 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.016630 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.039826 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 17:03:25 crc 
kubenswrapper[4710]: I1128 17:03:25.042852 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.078496 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.123920 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.144752 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.263945 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.289552 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.389948 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.416563 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.426574 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.472274 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.476443 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.575324 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.581821 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.588584 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.619336 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.680842 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.745610 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.747004 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.770667 4710 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.963559 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 28 17:03:25 crc kubenswrapper[4710]: I1128 17:03:25.998625 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.019998 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.103464 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.165829 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.195294 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.201505 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.225311 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.230304 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.248824 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.363618 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.368480 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.381966 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.409229 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.413383 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.468477 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.530274 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.547334 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 
17:03:26.590033 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.810172 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.827792 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.855309 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.858510 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 28 17:03:26 crc kubenswrapper[4710]: I1128 17:03:26.997725 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.007345 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.036252 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.044142 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.119831 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.208224 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.227840 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.292030 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.309270 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.320100 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.507831 4710 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.621056 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.644279 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.799510 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.835562 4710 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Nov 28 17:03:27 crc kubenswrapper[4710]: I1128 17:03:27.877054 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.001655 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.025602 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.120866 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.152558 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.230686 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.360838 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.369479 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.408171 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.480433 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.519875 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.593225 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.651459 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.797648 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.798166 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.804147 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 28 17:03:28 crc kubenswrapper[4710]: I1128 17:03:28.972646 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.059365 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.091167 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.142732 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.167860 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.181754 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.198268 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.259679 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.306260 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.373774 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.379121 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.513004 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.594350 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.609570 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.787301 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.860080 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.883689 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.937055 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Nov 28 17:03:29 crc kubenswrapper[4710]: I1128 17:03:29.954663 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 28 17:03:30 crc kubenswrapper[4710]: I1128 17:03:30.035424 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 28 17:03:30 crc kubenswrapper[4710]: I1128 17:03:30.148026 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 28 17:03:30 crc kubenswrapper[4710]: I1128 17:03:30.209839 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Nov 28 17:03:30 crc kubenswrapper[4710]: I1128 17:03:30.279337 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 28 17:03:30 crc kubenswrapper[4710]: I1128 17:03:30.296897 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Nov 28 17:03:30 crc kubenswrapper[4710]: I1128 17:03:30.429179 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 28 17:03:30 crc kubenswrapper[4710]: I1128 17:03:30.612790 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Nov 28 17:03:30 crc kubenswrapper[4710]: I1128 17:03:30.757678 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Nov 28 17:03:30 crc kubenswrapper[4710]: I1128 17:03:30.998680 4710 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.094088 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.101318 4710 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.101612 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6wpc2" podStartSLOduration=45.579339082 podStartE2EDuration="47.101590926s" podCreationTimestamp="2025-11-28 17:02:44 +0000 UTC" firstStartedPulling="2025-11-28 17:03:07.797604773 +0000 UTC m=+277.055904838" lastFinishedPulling="2025-11-28 17:03:09.319856607 +0000 UTC m=+278.578156682" observedRunningTime="2025-11-28 17:03:09.824127699 +0000 UTC m=+279.082427744" watchObservedRunningTime="2025-11-28 17:03:31.101590926 +0000 UTC m=+300.359890971"
Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.103589 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z8fvm" podStartSLOduration=46.906332313 podStartE2EDuration="50.103580403s" podCreationTimestamp="2025-11-28 17:02:41 +0000 UTC" firstStartedPulling="2025-11-28 17:02:42.541540702 +0000 UTC m=+251.799840747" lastFinishedPulling="2025-11-28 17:02:45.738788792 +0000 UTC m=+254.997088837" observedRunningTime="2025-11-28 17:03:06.519005963 +0000 UTC m=+275.777306008" watchObservedRunningTime="2025-11-28 17:03:31.103580403 +0000 UTC m=+300.361880448"
Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.104427 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c4w9z" podStartSLOduration=45.521541834 podStartE2EDuration="48.104422882s" podCreationTimestamp="2025-11-28 17:02:43 +0000 UTC" firstStartedPulling="2025-11-28 17:02:44.603710383 +0000 UTC m=+253.862010428" lastFinishedPulling="2025-11-28 17:02:47.186591421 +0000 UTC m=+256.444891476" observedRunningTime="2025-11-28 17:03:06.571824448 +0000 UTC m=+275.830124523" watchObservedRunningTime="2025-11-28 17:03:31.104422882 +0000 UTC m=+300.362722917"
Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.105514 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-v7m54"]
pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-v7m54"] Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.105564 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-cb86fb758-bg29m"] Nov 28 17:03:31 crc kubenswrapper[4710]: E1128 17:03:31.105740 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" containerName="installer" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.105769 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" containerName="installer" Nov 28 17:03:31 crc kubenswrapper[4710]: E1128 17:03:31.105778 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a82d2d7-4966-4dff-b1bf-5995aedd9fae" containerName="oauth-openshift" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.105785 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a82d2d7-4966-4dff-b1bf-5995aedd9fae" containerName="oauth-openshift" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.106388 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a82d2d7-4966-4dff-b1bf-5995aedd9fae" containerName="oauth-openshift" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.106418 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ca1472a-cb3f-49dd-bc30-ab277096f0e0" containerName="installer" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.106622 4710 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.106784 4710 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="451cc0a2-73a5-4317-9bb3-6b896a5ece97" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.106938 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.106859 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6wpc2"] Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.109119 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rtzhv"] Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.110823 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.111156 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.114228 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.116685 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.116987 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.117177 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.118957 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.120661 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.120809 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.121910 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.123224 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.123321 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.123448 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.125682 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131203 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " 
pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131272 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-template-error\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131290 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131311 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131330 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-router-certs\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131524 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131610 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/60dfeeb5-4e0b-408d-b4c0-6c4752014502-audit-dir\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131648 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-session\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131671 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phxzv\" (UniqueName: \"kubernetes.io/projected/60dfeeb5-4e0b-408d-b4c0-6c4752014502-kube-api-access-phxzv\") pod 
\"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131693 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131728 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-audit-policies\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131785 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131814 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-template-login\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.131865 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-service-ca\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.134728 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.145388 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.151060 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a82d2d7-4966-4dff-b1bf-5995aedd9fae" path="/var/lib/kubelet/pods/5a82d2d7-4966-4dff-b1bf-5995aedd9fae/volumes" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.158848 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.158827631 podStartE2EDuration="25.158827631s" podCreationTimestamp="2025-11-28 17:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:03:31.154657029 +0000 UTC 
m=+300.412957074" watchObservedRunningTime="2025-11-28 17:03:31.158827631 +0000 UTC m=+300.417127686" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.161442 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.229724 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233249 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-audit-policies\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233305 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233332 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-template-login\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233353 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-service-ca\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233413 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233455 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-template-error\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233470 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 
crc kubenswrapper[4710]: I1128 17:03:31.233491 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233506 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-router-certs\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233549 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233579 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/60dfeeb5-4e0b-408d-b4c0-6c4752014502-audit-dir\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233593 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-session\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233612 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phxzv\" (UniqueName: \"kubernetes.io/projected/60dfeeb5-4e0b-408d-b4c0-6c4752014502-kube-api-access-phxzv\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.233630 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.234127 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-cliconfig\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.234190 4710 
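The three-step pattern running through the entries above (VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded) is the kubelet volume manager reconciling a desired state against an actual state, one log line per transition. A minimal Go sketch of that reconcile shape; the types and function here are illustrative stand-ins, not the kubelet's actual implementation:

package main

import "fmt"

type volume struct{ name, pod string }

// reconcile mounts every desired volume that is not yet in the actual state,
// mirroring the MountVolume started / SetUp succeeded pairs in the log.
func reconcile(desired map[string]volume, actual map[string]volume) {
	for key, v := range desired {
		if _, mounted := actual[key]; !mounted {
			fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
			actual[key] = v // stands in for the real MountVolume.SetUp call
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.name)
		}
	}
}

func main() {
	desired := map[string]volume{
		"audit-policies": {"audit-policies", "oauth-openshift-cb86fb758-bg29m"},
	}
	reconcile(desired, map[string]volume{})
}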
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-audit-policies\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.234563 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.235376 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-service-ca\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.235417 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/60dfeeb5-4e0b-408d-b4c0-6c4752014502-audit-dir\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.239611 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.239661 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-serving-cert\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.240139 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-template-login\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.240455 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-template-error\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.240524 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.241780 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.244203 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-session\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.247080 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/60dfeeb5-4e0b-408d-b4c0-6c4752014502-v4-0-config-system-router-certs\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.250691 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phxzv\" (UniqueName: \"kubernetes.io/projected/60dfeeb5-4e0b-408d-b4c0-6c4752014502-kube-api-access-phxzv\") pod \"oauth-openshift-cb86fb758-bg29m\" (UID: \"60dfeeb5-4e0b-408d-b4c0-6c4752014502\") " pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.268201 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.385103 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.438875 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.485585 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.507424 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.702530 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.713413 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.734150 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.816599 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.848790 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 17:03:31 crc kubenswrapper[4710]: I1128 17:03:31.952699 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 28 17:03:32 crc kubenswrapper[4710]: I1128 17:03:32.155019 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 28 17:03:32 crc kubenswrapper[4710]: I1128 17:03:32.183941 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 28 17:03:32 crc kubenswrapper[4710]: I1128 17:03:32.192711 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cb86fb758-bg29m"] Nov 28 17:03:32 crc kubenswrapper[4710]: I1128 17:03:32.342547 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 17:03:32 crc kubenswrapper[4710]: I1128 17:03:32.350375 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 28 17:03:32 crc kubenswrapper[4710]: I1128 17:03:32.434896 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-cb86fb758-bg29m"] Nov 28 17:03:32 crc kubenswrapper[4710]: I1128 17:03:32.553160 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 28 17:03:33 crc kubenswrapper[4710]: I1128 17:03:33.016656 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 28 17:03:33 crc kubenswrapper[4710]: I1128 17:03:33.069115 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" event={"ID":"60dfeeb5-4e0b-408d-b4c0-6c4752014502","Type":"ContainerStarted","Data":"d5c2cc8f98a21bc9c111229d7d4510aa0d4ffbb87fdc8f3c425396bb0d928862"} Nov 28 17:03:33 crc kubenswrapper[4710]: I1128 17:03:33.069354 4710 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" event={"ID":"60dfeeb5-4e0b-408d-b4c0-6c4752014502","Type":"ContainerStarted","Data":"d050dc1e2785f344555163894d317f934a9365f92b33058aac8a1ba1a90c1ba3"} Nov 28 17:03:33 crc kubenswrapper[4710]: I1128 17:03:33.069426 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:33 crc kubenswrapper[4710]: I1128 17:03:33.091526 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" podStartSLOduration=59.091511765 podStartE2EDuration="59.091511765s" podCreationTimestamp="2025-11-28 17:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:03:33.087596983 +0000 UTC m=+302.345897028" watchObservedRunningTime="2025-11-28 17:03:33.091511765 +0000 UTC m=+302.349811810" Nov 28 17:03:33 crc kubenswrapper[4710]: I1128 17:03:33.132119 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-cb86fb758-bg29m" Nov 28 17:03:33 crc kubenswrapper[4710]: I1128 17:03:33.617079 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 28 17:03:33 crc kubenswrapper[4710]: I1128 17:03:33.828086 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 28 17:03:40 crc kubenswrapper[4710]: I1128 17:03:40.546941 4710 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 28 17:03:40 crc kubenswrapper[4710]: I1128 17:03:40.547942 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c450d131c74cdf7635ade48899c11000a16bebd6468c4619933413cabf7a4608" gracePeriod=5 Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.145720 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.146196 4710 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c450d131c74cdf7635ade48899c11000a16bebd6468c4619933413cabf7a4608" exitCode=137 Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.236713 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.236860 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.357460 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.357520 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.357576 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.357609 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.357654 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.357691 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.357783 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.357748 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.357842 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.358424 4710 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.358453 4710 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.358474 4710 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.358491 4710 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.370960 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:03:46 crc kubenswrapper[4710]: I1128 17:03:46.459355 4710 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:47 crc kubenswrapper[4710]: I1128 17:03:47.148142 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 28 17:03:47 crc kubenswrapper[4710]: I1128 17:03:47.152664 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 28 17:03:47 crc kubenswrapper[4710]: I1128 17:03:47.152733 4710 scope.go:117] "RemoveContainer" containerID="c450d131c74cdf7635ade48899c11000a16bebd6468c4619933413cabf7a4608" Nov 28 17:03:47 crc kubenswrapper[4710]: I1128 17:03:47.152808 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 28 17:03:49 crc kubenswrapper[4710]: I1128 17:03:49.811351 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gmx26"] Nov 28 17:03:49 crc kubenswrapper[4710]: E1128 17:03:49.812057 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 17:03:49 crc kubenswrapper[4710]: I1128 17:03:49.812104 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 17:03:49 crc kubenswrapper[4710]: I1128 17:03:49.812299 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 28 17:03:49 crc kubenswrapper[4710]: I1128 17:03:49.813567 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:49 crc kubenswrapper[4710]: I1128 17:03:49.815558 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 28 17:03:49 crc kubenswrapper[4710]: I1128 17:03:49.815953 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmx26"] Nov 28 17:03:49 crc kubenswrapper[4710]: I1128 17:03:49.908050 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec-utilities\") pod \"redhat-marketplace-gmx26\" (UID: \"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec\") " pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:49 crc kubenswrapper[4710]: I1128 17:03:49.908389 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec-catalog-content\") pod \"redhat-marketplace-gmx26\" (UID: \"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec\") " pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:49 crc kubenswrapper[4710]: I1128 17:03:49.908418 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4zv8\" (UniqueName: \"kubernetes.io/projected/1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec-kube-api-access-s4zv8\") pod \"redhat-marketplace-gmx26\" (UID: \"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec\") " pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.010062 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec-catalog-content\") pod \"redhat-marketplace-gmx26\" (UID: \"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec\") " pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.010130 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4zv8\" (UniqueName: \"kubernetes.io/projected/1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec-kube-api-access-s4zv8\") pod \"redhat-marketplace-gmx26\" (UID: \"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec\") " pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.010203 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec-utilities\") pod \"redhat-marketplace-gmx26\" (UID: \"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec\") " pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.010743 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec-catalog-content\") pod \"redhat-marketplace-gmx26\" (UID: \"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec\") " pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.010850 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec-utilities\") pod \"redhat-marketplace-gmx26\" (UID: 
\"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec\") " pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.031803 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4zv8\" (UniqueName: \"kubernetes.io/projected/1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec-kube-api-access-s4zv8\") pod \"redhat-marketplace-gmx26\" (UID: \"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec\") " pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.137845 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.362232 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gmx26"] Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.401241 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fdmdc"] Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.401498 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" podUID="411f84b6-6676-4b0a-957c-eff49570cc88" containerName="controller-manager" containerID="cri-o://b16b1303a5147032df30a83d8d3b045358cf51d65b7b3d4fac8293c5a328f7a5" gracePeriod=30 Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.508680 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"] Nov 28 17:03:50 crc kubenswrapper[4710]: I1128 17:03:50.508967 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" podUID="d933366c-bee9-4d19-8152-b4401d886b35" containerName="route-controller-manager" containerID="cri-o://7eab48f0a5d37cb5dc3f1ff0539c9cffe0f56f8796c129409b17079ee3ca7391" gracePeriod=30 Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.183654 4710 generic.go:334] "Generic (PLEG): container finished" podID="d933366c-bee9-4d19-8152-b4401d886b35" containerID="7eab48f0a5d37cb5dc3f1ff0539c9cffe0f56f8796c129409b17079ee3ca7391" exitCode=0 Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.183815 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" event={"ID":"d933366c-bee9-4d19-8152-b4401d886b35","Type":"ContainerDied","Data":"7eab48f0a5d37cb5dc3f1ff0539c9cffe0f56f8796c129409b17079ee3ca7391"} Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.186080 4710 generic.go:334] "Generic (PLEG): container finished" podID="411f84b6-6676-4b0a-957c-eff49570cc88" containerID="b16b1303a5147032df30a83d8d3b045358cf51d65b7b3d4fac8293c5a328f7a5" exitCode=0 Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.186182 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" event={"ID":"411f84b6-6676-4b0a-957c-eff49570cc88","Type":"ContainerDied","Data":"b16b1303a5147032df30a83d8d3b045358cf51d65b7b3d4fac8293c5a328f7a5"} Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.188308 4710 generic.go:334] "Generic (PLEG): container finished" podID="1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec" containerID="c60e80c1034939d1ed65ae74bca10fb6f741ad1995d4af8c20d4e4d0ab95e415" exitCode=0 Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 
17:03:51.188356 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmx26" event={"ID":"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec","Type":"ContainerDied","Data":"c60e80c1034939d1ed65ae74bca10fb6f741ad1995d4af8c20d4e4d0ab95e415"} Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.188385 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmx26" event={"ID":"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec","Type":"ContainerStarted","Data":"833a3b9f9a145c8fcf64a2cdd03f7732787448332f4b7d649cacbb93ac7f19b1"} Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.378309 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.434500 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/411f84b6-6676-4b0a-957c-eff49570cc88-serving-cert\") pod \"411f84b6-6676-4b0a-957c-eff49570cc88\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.434541 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-proxy-ca-bundles\") pod \"411f84b6-6676-4b0a-957c-eff49570cc88\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.434564 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn8vv\" (UniqueName: \"kubernetes.io/projected/411f84b6-6676-4b0a-957c-eff49570cc88-kube-api-access-cn8vv\") pod \"411f84b6-6676-4b0a-957c-eff49570cc88\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.434633 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-client-ca\") pod \"411f84b6-6676-4b0a-957c-eff49570cc88\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.434655 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-config\") pod \"411f84b6-6676-4b0a-957c-eff49570cc88\" (UID: \"411f84b6-6676-4b0a-957c-eff49570cc88\") " Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.435395 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "411f84b6-6676-4b0a-957c-eff49570cc88" (UID: "411f84b6-6676-4b0a-957c-eff49570cc88"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.435482 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-client-ca" (OuterVolumeSpecName: "client-ca") pod "411f84b6-6676-4b0a-957c-eff49570cc88" (UID: "411f84b6-6676-4b0a-957c-eff49570cc88"). InnerVolumeSpecName "client-ca". 
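For anyone post-processing this journal: each kubenswrapper payload carries a klog-style header, severity letter plus MMDD (e.g. I1128), wall-clock time, PID, then source file:line before the message. A small, illustrative Go parser for that header; the regex is written against the lines in this log and is an assumption, not an official klog API:

package main

import (
	"fmt"
	"regexp"
)

// Matches headers like: I1128 17:03:51.188356 4710 kubelet.go:2453] msg...
var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `I1128 17:03:51.188356 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod"`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		fmt.Printf("sev=%s date=%s time=%s pid=%s src=%s msg=%s\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}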
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.435665 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-config" (OuterVolumeSpecName: "config") pod "411f84b6-6676-4b0a-957c-eff49570cc88" (UID: "411f84b6-6676-4b0a-957c-eff49570cc88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.449171 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411f84b6-6676-4b0a-957c-eff49570cc88-kube-api-access-cn8vv" (OuterVolumeSpecName: "kube-api-access-cn8vv") pod "411f84b6-6676-4b0a-957c-eff49570cc88" (UID: "411f84b6-6676-4b0a-957c-eff49570cc88"). InnerVolumeSpecName "kube-api-access-cn8vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.457605 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411f84b6-6676-4b0a-957c-eff49570cc88-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "411f84b6-6676-4b0a-957c-eff49570cc88" (UID: "411f84b6-6676-4b0a-957c-eff49570cc88"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.532280 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.535194 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-client-ca\") pod \"d933366c-bee9-4d19-8152-b4401d886b35\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.535270 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d933366c-bee9-4d19-8152-b4401d886b35-serving-cert\") pod \"d933366c-bee9-4d19-8152-b4401d886b35\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.535313 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c2t2\" (UniqueName: \"kubernetes.io/projected/d933366c-bee9-4d19-8152-b4401d886b35-kube-api-access-6c2t2\") pod \"d933366c-bee9-4d19-8152-b4401d886b35\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.535350 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-config\") pod \"d933366c-bee9-4d19-8152-b4401d886b35\" (UID: \"d933366c-bee9-4d19-8152-b4401d886b35\") " Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.535586 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/411f84b6-6676-4b0a-957c-eff49570cc88-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.535601 4710 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:51 crc 
kubenswrapper[4710]: I1128 17:03:51.535616 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn8vv\" (UniqueName: \"kubernetes.io/projected/411f84b6-6676-4b0a-957c-eff49570cc88-kube-api-access-cn8vv\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.535628 4710 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.535639 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/411f84b6-6676-4b0a-957c-eff49570cc88-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.536436 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-config" (OuterVolumeSpecName: "config") pod "d933366c-bee9-4d19-8152-b4401d886b35" (UID: "d933366c-bee9-4d19-8152-b4401d886b35"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.536525 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-client-ca" (OuterVolumeSpecName: "client-ca") pod "d933366c-bee9-4d19-8152-b4401d886b35" (UID: "d933366c-bee9-4d19-8152-b4401d886b35"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.538775 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d933366c-bee9-4d19-8152-b4401d886b35-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d933366c-bee9-4d19-8152-b4401d886b35" (UID: "d933366c-bee9-4d19-8152-b4401d886b35"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.540260 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d933366c-bee9-4d19-8152-b4401d886b35-kube-api-access-6c2t2" (OuterVolumeSpecName: "kube-api-access-6c2t2") pod "d933366c-bee9-4d19-8152-b4401d886b35" (UID: "d933366c-bee9-4d19-8152-b4401d886b35"). InnerVolumeSpecName "kube-api-access-6c2t2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.636883 4710 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.636918 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d933366c-bee9-4d19-8152-b4401d886b35-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.636948 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c2t2\" (UniqueName: \"kubernetes.io/projected/d933366c-bee9-4d19-8152-b4401d886b35-kube-api-access-6c2t2\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:51 crc kubenswrapper[4710]: I1128 17:03:51.636957 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d933366c-bee9-4d19-8152-b4401d886b35-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.194587 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.194570 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" event={"ID":"d933366c-bee9-4d19-8152-b4401d886b35","Type":"ContainerDied","Data":"e679a26314d8e7cd8114263132128846c177171b90c2ece40118fddb1a4248e7"} Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.194728 4710 scope.go:117] "RemoveContainer" containerID="7eab48f0a5d37cb5dc3f1ff0539c9cffe0f56f8796c129409b17079ee3ca7391" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.196325 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" event={"ID":"411f84b6-6676-4b0a-957c-eff49570cc88","Type":"ContainerDied","Data":"21c2f1b725c6613aaffcc1ca23f4fbadf114b19dac4d18b9bae286781c8bfdeb"} Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.196387 4710 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.194587 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.194570 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw" event={"ID":"d933366c-bee9-4d19-8152-b4401d886b35","Type":"ContainerDied","Data":"e679a26314d8e7cd8114263132128846c177171b90c2ece40118fddb1a4248e7"} Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.194728 4710 scope.go:117] "RemoveContainer" containerID="7eab48f0a5d37cb5dc3f1ff0539c9cffe0f56f8796c129409b17079ee3ca7391" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.196325 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" event={"ID":"411f84b6-6676-4b0a-957c-eff49570cc88","Type":"ContainerDied","Data":"21c2f1b725c6613aaffcc1ca23f4fbadf114b19dac4d18b9bae286781c8bfdeb"} Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.196387 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fdmdc" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.211148 4710 scope.go:117] "RemoveContainer" containerID="b16b1303a5147032df30a83d8d3b045358cf51d65b7b3d4fac8293c5a328f7a5" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.224662 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw"] Nov 28 17:03:52 crc kubenswrapper[4710]: E1128 17:03:52.225033 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411f84b6-6676-4b0a-957c-eff49570cc88" containerName="controller-manager" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.225056 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="411f84b6-6676-4b0a-957c-eff49570cc88" containerName="controller-manager" Nov 28 17:03:52 crc kubenswrapper[4710]: E1128 17:03:52.225069 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d933366c-bee9-4d19-8152-b4401d886b35" containerName="route-controller-manager" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.225076 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="d933366c-bee9-4d19-8152-b4401d886b35" containerName="route-controller-manager" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.225176 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="d933366c-bee9-4d19-8152-b4401d886b35" containerName="route-controller-manager" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.225188 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="411f84b6-6676-4b0a-957c-eff49570cc88" containerName="controller-manager" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.225587 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.228438 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.228733 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.228894 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.229056 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.229162 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.229258 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-679d948996-jjcpd"] Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.230178 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.231309 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.231522 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.232446 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fdmdc"] Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.234281 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.234612 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.234918 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.234944 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.235149 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.238423 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fdmdc"] Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.242642 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.247175 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-679d948996-jjcpd"] Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.248238 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czgzj\" (UniqueName: \"kubernetes.io/projected/b7d25392-1fae-413d-ad03-20f53f1ac112-kube-api-access-czgzj\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.248296 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-client-ca\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.248328 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-config\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 
17:03:52.248364 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhdv\" (UniqueName: \"kubernetes.io/projected/0ea3c254-1948-428e-a9af-4390bb516cea-kube-api-access-5nhdv\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.248425 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-proxy-ca-bundles\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.248461 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-client-ca\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.248489 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea3c254-1948-428e-a9af-4390bb516cea-serving-cert\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.248531 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d25392-1fae-413d-ad03-20f53f1ac112-serving-cert\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.248564 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-config\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.250340 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw"] Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.262082 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"] Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.278595 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-kr9gw"] Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.349557 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-config\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " 
pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.349660 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czgzj\" (UniqueName: \"kubernetes.io/projected/b7d25392-1fae-413d-ad03-20f53f1ac112-kube-api-access-czgzj\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.349699 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-client-ca\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.349726 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-config\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.349790 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhdv\" (UniqueName: \"kubernetes.io/projected/0ea3c254-1948-428e-a9af-4390bb516cea-kube-api-access-5nhdv\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.349849 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-proxy-ca-bundles\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.349876 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-client-ca\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.349903 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea3c254-1948-428e-a9af-4390bb516cea-serving-cert\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.349944 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d25392-1fae-413d-ad03-20f53f1ac112-serving-cert\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 
17:03:52.351226 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-client-ca\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.351350 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-config\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.351354 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-client-ca\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.351390 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-proxy-ca-bundles\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.352234 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-config\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.355316 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea3c254-1948-428e-a9af-4390bb516cea-serving-cert\") pod \"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.357357 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d25392-1fae-413d-ad03-20f53f1ac112-serving-cert\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.367030 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czgzj\" (UniqueName: \"kubernetes.io/projected/b7d25392-1fae-413d-ad03-20f53f1ac112-kube-api-access-czgzj\") pod \"route-controller-manager-dc5d576d8-p6lhw\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.383600 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhdv\" (UniqueName: \"kubernetes.io/projected/0ea3c254-1948-428e-a9af-4390bb516cea-kube-api-access-5nhdv\") pod 
\"controller-manager-679d948996-jjcpd\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.551350 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.565179 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.745991 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw"] Nov 28 17:03:52 crc kubenswrapper[4710]: I1128 17:03:52.768519 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-679d948996-jjcpd"] Nov 28 17:03:52 crc kubenswrapper[4710]: W1128 17:03:52.778581 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ea3c254_1948_428e_a9af_4390bb516cea.slice/crio-355afcac37a985192f4725f148448a3e58a8b0807329df744d59ffcde133fc5c WatchSource:0}: Error finding container 355afcac37a985192f4725f148448a3e58a8b0807329df744d59ffcde133fc5c: Status 404 returned error can't find the container with id 355afcac37a985192f4725f148448a3e58a8b0807329df744d59ffcde133fc5c Nov 28 17:03:53 crc kubenswrapper[4710]: I1128 17:03:53.149439 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411f84b6-6676-4b0a-957c-eff49570cc88" path="/var/lib/kubelet/pods/411f84b6-6676-4b0a-957c-eff49570cc88/volumes" Nov 28 17:03:53 crc kubenswrapper[4710]: I1128 17:03:53.150684 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d933366c-bee9-4d19-8152-b4401d886b35" path="/var/lib/kubelet/pods/d933366c-bee9-4d19-8152-b4401d886b35/volumes" Nov 28 17:03:53 crc kubenswrapper[4710]: I1128 17:03:53.203139 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" event={"ID":"b7d25392-1fae-413d-ad03-20f53f1ac112","Type":"ContainerStarted","Data":"d50c7c2af70f992864eac8deaea9a6384b33dd4aa01d9b5b5d26b82253334950"} Nov 28 17:03:53 crc kubenswrapper[4710]: I1128 17:03:53.204290 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" event={"ID":"0ea3c254-1948-428e-a9af-4390bb516cea","Type":"ContainerStarted","Data":"355afcac37a985192f4725f148448a3e58a8b0807329df744d59ffcde133fc5c"} Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.211892 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" event={"ID":"b7d25392-1fae-413d-ad03-20f53f1ac112","Type":"ContainerStarted","Data":"749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d"} Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.212257 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.213875 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" 
event={"ID":"0ea3c254-1948-428e-a9af-4390bb516cea","Type":"ContainerStarted","Data":"51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b"} Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.214088 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.216224 4710 generic.go:334] "Generic (PLEG): container finished" podID="1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec" containerID="d18eab3550f81e6ae38d75663df0014d1bcd61d50611c1e78023feead395f7f1" exitCode=0 Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.216263 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmx26" event={"ID":"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec","Type":"ContainerDied","Data":"d18eab3550f81e6ae38d75663df0014d1bcd61d50611c1e78023feead395f7f1"} Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.218665 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.220533 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.264114 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" podStartSLOduration=4.264092125 podStartE2EDuration="4.264092125s" podCreationTimestamp="2025-11-28 17:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:03:54.264030413 +0000 UTC m=+323.522330478" watchObservedRunningTime="2025-11-28 17:03:54.264092125 +0000 UTC m=+323.522392170" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.265158 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" podStartSLOduration=4.265146721 podStartE2EDuration="4.265146721s" podCreationTimestamp="2025-11-28 17:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:03:54.242090412 +0000 UTC m=+323.500390457" watchObservedRunningTime="2025-11-28 17:03:54.265146721 +0000 UTC m=+323.523446766" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.405339 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sl4b6"] Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.406839 4710 util.go:30] "No sandbox for pod can be found. 
Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.405339 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sl4b6"] Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.406839 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.418938 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl4b6"] Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.481051 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78802154-2da1-4554-92d3-20994dfac727-utilities\") pod \"redhat-marketplace-sl4b6\" (UID: \"78802154-2da1-4554-92d3-20994dfac727\") " pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.481328 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8tsc\" (UniqueName: \"kubernetes.io/projected/78802154-2da1-4554-92d3-20994dfac727-kube-api-access-w8tsc\") pod \"redhat-marketplace-sl4b6\" (UID: \"78802154-2da1-4554-92d3-20994dfac727\") " pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.481367 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78802154-2da1-4554-92d3-20994dfac727-catalog-content\") pod \"redhat-marketplace-sl4b6\" (UID: \"78802154-2da1-4554-92d3-20994dfac727\") " pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.582180 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78802154-2da1-4554-92d3-20994dfac727-utilities\") pod \"redhat-marketplace-sl4b6\" (UID: \"78802154-2da1-4554-92d3-20994dfac727\") " pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.582226 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8tsc\" (UniqueName: \"kubernetes.io/projected/78802154-2da1-4554-92d3-20994dfac727-kube-api-access-w8tsc\") pod \"redhat-marketplace-sl4b6\" (UID: \"78802154-2da1-4554-92d3-20994dfac727\") " pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.582250 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78802154-2da1-4554-92d3-20994dfac727-catalog-content\") pod \"redhat-marketplace-sl4b6\" (UID: \"78802154-2da1-4554-92d3-20994dfac727\") " pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.582894 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78802154-2da1-4554-92d3-20994dfac727-catalog-content\") pod \"redhat-marketplace-sl4b6\" (UID: \"78802154-2da1-4554-92d3-20994dfac727\") " pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.582906 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78802154-2da1-4554-92d3-20994dfac727-utilities\") pod \"redhat-marketplace-sl4b6\" (UID: \"78802154-2da1-4554-92d3-20994dfac727\") " pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.603666 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-w8tsc\" (UniqueName: \"kubernetes.io/projected/78802154-2da1-4554-92d3-20994dfac727-kube-api-access-w8tsc\") pod \"redhat-marketplace-sl4b6\" (UID: \"78802154-2da1-4554-92d3-20994dfac727\") " pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.732125 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.855693 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 28 17:03:54 crc kubenswrapper[4710]: I1128 17:03:54.956057 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sl4b6"] Nov 28 17:03:54 crc kubenswrapper[4710]: W1128 17:03:54.964229 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78802154_2da1_4554_92d3_20994dfac727.slice/crio-3faf9619e7004e5bd78117dddd2924aae9d7e31985f1c524fd0f41a4c2f7be43 WatchSource:0}: Error finding container 3faf9619e7004e5bd78117dddd2924aae9d7e31985f1c524fd0f41a4c2f7be43: Status 404 returned error can't find the container with id 3faf9619e7004e5bd78117dddd2924aae9d7e31985f1c524fd0f41a4c2f7be43 Nov 28 17:03:55 crc kubenswrapper[4710]: I1128 17:03:55.223004 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gmx26" event={"ID":"1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec","Type":"ContainerStarted","Data":"5b28021572cece44ef0914fec63d7bc2a7579af231d86d2e674e471e7a24791a"} Nov 28 17:03:55 crc kubenswrapper[4710]: I1128 17:03:55.224179 4710 generic.go:334] "Generic (PLEG): container finished" podID="78802154-2da1-4554-92d3-20994dfac727" containerID="b92d0082d9a95a1070d3f21bb2e2eeb06b5be6d9979e600ce27c1102869ce897" exitCode=0 Nov 28 17:03:55 crc kubenswrapper[4710]: I1128 17:03:55.224239 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl4b6" event={"ID":"78802154-2da1-4554-92d3-20994dfac727","Type":"ContainerDied","Data":"b92d0082d9a95a1070d3f21bb2e2eeb06b5be6d9979e600ce27c1102869ce897"} Nov 28 17:03:55 crc kubenswrapper[4710]: I1128 17:03:55.224295 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl4b6" event={"ID":"78802154-2da1-4554-92d3-20994dfac727","Type":"ContainerStarted","Data":"3faf9619e7004e5bd78117dddd2924aae9d7e31985f1c524fd0f41a4c2f7be43"} Nov 28 17:03:55 crc kubenswrapper[4710]: I1128 17:03:55.242994 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gmx26" podStartSLOduration=2.575145352 podStartE2EDuration="6.242967206s" podCreationTimestamp="2025-11-28 17:03:49 +0000 UTC" firstStartedPulling="2025-11-28 17:03:51.190906447 +0000 UTC m=+320.449206532" lastFinishedPulling="2025-11-28 17:03:54.858728341 +0000 UTC m=+324.117028386" observedRunningTime="2025-11-28 17:03:55.242012793 +0000 UTC m=+324.500312868" watchObservedRunningTime="2025-11-28 17:03:55.242967206 +0000 UTC m=+324.501267251" Nov 28 17:03:56 crc kubenswrapper[4710]: I1128 17:03:56.148833 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" podUID="48374daa-0613-4fe0-94a5-311e48a3979f" containerName="registry" 
containerID="cri-o://65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71" gracePeriod=30 Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.127789 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.217021 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-registry-certificates\") pod \"48374daa-0613-4fe0-94a5-311e48a3979f\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.217158 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"48374daa-0613-4fe0-94a5-311e48a3979f\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.217185 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-registry-tls\") pod \"48374daa-0613-4fe0-94a5-311e48a3979f\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.217211 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-bound-sa-token\") pod \"48374daa-0613-4fe0-94a5-311e48a3979f\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.217229 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx8vn\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-kube-api-access-hx8vn\") pod \"48374daa-0613-4fe0-94a5-311e48a3979f\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.217272 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48374daa-0613-4fe0-94a5-311e48a3979f-installation-pull-secrets\") pod \"48374daa-0613-4fe0-94a5-311e48a3979f\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.217294 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-trusted-ca\") pod \"48374daa-0613-4fe0-94a5-311e48a3979f\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.217322 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48374daa-0613-4fe0-94a5-311e48a3979f-ca-trust-extracted\") pod \"48374daa-0613-4fe0-94a5-311e48a3979f\" (UID: \"48374daa-0613-4fe0-94a5-311e48a3979f\") " Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.218736 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "48374daa-0613-4fe0-94a5-311e48a3979f" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.219072 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "48374daa-0613-4fe0-94a5-311e48a3979f" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.228012 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "48374daa-0613-4fe0-94a5-311e48a3979f" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.228275 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-kube-api-access-hx8vn" (OuterVolumeSpecName: "kube-api-access-hx8vn") pod "48374daa-0613-4fe0-94a5-311e48a3979f" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f"). InnerVolumeSpecName "kube-api-access-hx8vn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.229521 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "48374daa-0613-4fe0-94a5-311e48a3979f" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.235189 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "48374daa-0613-4fe0-94a5-311e48a3979f" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.235697 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl4b6" event={"ID":"78802154-2da1-4554-92d3-20994dfac727","Type":"ContainerStarted","Data":"c873979e8dcfe5cec231066bb8e84e0318f0a735d7ce0b62c588d699da6951e9"} Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.237219 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48374daa-0613-4fe0-94a5-311e48a3979f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "48374daa-0613-4fe0-94a5-311e48a3979f" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.237482 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48374daa-0613-4fe0-94a5-311e48a3979f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "48374daa-0613-4fe0-94a5-311e48a3979f" (UID: "48374daa-0613-4fe0-94a5-311e48a3979f"). 
InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.237560 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.237501 4710 generic.go:334] "Generic (PLEG): container finished" podID="48374daa-0613-4fe0-94a5-311e48a3979f" containerID="65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71" exitCode=0 Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.237887 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" event={"ID":"48374daa-0613-4fe0-94a5-311e48a3979f","Type":"ContainerDied","Data":"65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71"} Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.237989 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rtzhv" event={"ID":"48374daa-0613-4fe0-94a5-311e48a3979f","Type":"ContainerDied","Data":"3fe8f1b7873f9c01da6ae83529b9d567f4fc2c146b9368607638ba18510fe35d"} Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.238082 4710 scope.go:117] "RemoveContainer" containerID="65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.319987 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.320299 4710 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/48374daa-0613-4fe0-94a5-311e48a3979f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.320317 4710 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/48374daa-0613-4fe0-94a5-311e48a3979f-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.320337 4710 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.320348 4710 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.320360 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx8vn\" (UniqueName: \"kubernetes.io/projected/48374daa-0613-4fe0-94a5-311e48a3979f-kube-api-access-hx8vn\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.320370 4710 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/48374daa-0613-4fe0-94a5-311e48a3979f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.334253 4710 scope.go:117] "RemoveContainer" containerID="65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71" Nov 28 17:03:57 crc kubenswrapper[4710]: E1128 
17:03:57.334799 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71\": container with ID starting with 65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71 not found: ID does not exist" containerID="65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.334861 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71"} err="failed to get container status \"65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71\": rpc error: code = NotFound desc = could not find container \"65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71\": container with ID starting with 65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71 not found: ID does not exist" Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.344786 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rtzhv"] Nov 28 17:03:57 crc kubenswrapper[4710]: I1128 17:03:57.349890 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rtzhv"]
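The two NotFound errors above read as the benign tail of the image-registry teardown: the first "RemoveContainer" pass had already deleted the container, so the repeated pass and its ContainerStatus lookup find nothing in CRI-O, and kubelet logs the miss before the pod object is DELETEd and REMOVEd from the API. To reconstruct one container's lifecycle out of a capture like this, filtering the journal for the 64-hex container ID is enough; the helper below is a hypothetical convenience of mine, not an existing tool, and the file name is assumed.

#!/usr/bin/env python3
"""Sketch: pull every journal entry mentioning one container ID."""
import sys

def lifecycle(path: str, container_id: str):
    # Yield matching entries in journal order; tolerate odd bytes.
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if container_id in line:
                yield line.rstrip("\n")

if __name__ == "__main__":
    cid = "65893cda9408d0a55df85b069ab51f8ff5914b4f569048a92854dd387f354e71"
    for entry in lifecycle(sys.argv[1], cid):
        print(entry)

Run over this capture (python3 lifecycle.py kubelet.log), that ID yields the "Killing container with a grace period", ContainerDied, "RemoveContainer", and NotFound entries in order, which is the expected sequence when the runtime finishes deletion before the second status query.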
Nov 28 17:03:58 crc kubenswrapper[4710]: I1128 17:03:58.246696 4710 generic.go:334] "Generic (PLEG): container finished" podID="78802154-2da1-4554-92d3-20994dfac727" containerID="c873979e8dcfe5cec231066bb8e84e0318f0a735d7ce0b62c588d699da6951e9" exitCode=0 Nov 28 17:03:58 crc kubenswrapper[4710]: I1128 17:03:58.246778 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl4b6" event={"ID":"78802154-2da1-4554-92d3-20994dfac727","Type":"ContainerDied","Data":"c873979e8dcfe5cec231066bb8e84e0318f0a735d7ce0b62c588d699da6951e9"} Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.149657 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48374daa-0613-4fe0-94a5-311e48a3979f" path="/var/lib/kubelet/pods/48374daa-0613-4fe0-94a5-311e48a3979f/volumes" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.204502 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k8rlt"] Nov 28 17:03:59 crc kubenswrapper[4710]: E1128 17:03:59.204714 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48374daa-0613-4fe0-94a5-311e48a3979f" containerName="registry" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.204726 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="48374daa-0613-4fe0-94a5-311e48a3979f" containerName="registry" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.204834 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="48374daa-0613-4fe0-94a5-311e48a3979f" containerName="registry" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.205539 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.222918 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8rlt"] Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.247155 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31d7e40a-6e97-4337-be6d-4f93a852e342-utilities\") pod \"redhat-marketplace-k8rlt\" (UID: \"31d7e40a-6e97-4337-be6d-4f93a852e342\") " pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.247210 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpj8c\" (UniqueName: \"kubernetes.io/projected/31d7e40a-6e97-4337-be6d-4f93a852e342-kube-api-access-jpj8c\") pod \"redhat-marketplace-k8rlt\" (UID: \"31d7e40a-6e97-4337-be6d-4f93a852e342\") " pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.247314 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31d7e40a-6e97-4337-be6d-4f93a852e342-catalog-content\") pod \"redhat-marketplace-k8rlt\" (UID: \"31d7e40a-6e97-4337-be6d-4f93a852e342\") " pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.259733 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sl4b6" event={"ID":"78802154-2da1-4554-92d3-20994dfac727","Type":"ContainerStarted","Data":"082775bcb66bcd54dba877ee9e37e3a7c8def5e5ccb57b36d790705e23074fb3"} Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.275116 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sl4b6" podStartSLOduration=1.704179915 podStartE2EDuration="5.275098855s" podCreationTimestamp="2025-11-28 17:03:54 +0000 UTC" firstStartedPulling="2025-11-28 17:03:55.225514606 +0000 UTC m=+324.483814651" lastFinishedPulling="2025-11-28 17:03:58.796433546 +0000 UTC m=+328.054733591" observedRunningTime="2025-11-28 17:03:59.272783922 +0000 UTC m=+328.531083977" watchObservedRunningTime="2025-11-28 17:03:59.275098855 +0000 UTC m=+328.533398890" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.348951 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31d7e40a-6e97-4337-be6d-4f93a852e342-utilities\") pod \"redhat-marketplace-k8rlt\" (UID: \"31d7e40a-6e97-4337-be6d-4f93a852e342\") " pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.349054 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpj8c\" (UniqueName: \"kubernetes.io/projected/31d7e40a-6e97-4337-be6d-4f93a852e342-kube-api-access-jpj8c\") pod \"redhat-marketplace-k8rlt\" (UID: \"31d7e40a-6e97-4337-be6d-4f93a852e342\") " pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.349134 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31d7e40a-6e97-4337-be6d-4f93a852e342-catalog-content\") pod \"redhat-marketplace-k8rlt\" (UID: 
\"31d7e40a-6e97-4337-be6d-4f93a852e342\") " pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.349647 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31d7e40a-6e97-4337-be6d-4f93a852e342-catalog-content\") pod \"redhat-marketplace-k8rlt\" (UID: \"31d7e40a-6e97-4337-be6d-4f93a852e342\") " pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.349902 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31d7e40a-6e97-4337-be6d-4f93a852e342-utilities\") pod \"redhat-marketplace-k8rlt\" (UID: \"31d7e40a-6e97-4337-be6d-4f93a852e342\") " pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.371250 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpj8c\" (UniqueName: \"kubernetes.io/projected/31d7e40a-6e97-4337-be6d-4f93a852e342-kube-api-access-jpj8c\") pod \"redhat-marketplace-k8rlt\" (UID: \"31d7e40a-6e97-4337-be6d-4f93a852e342\") " pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.520659 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:03:59 crc kubenswrapper[4710]: I1128 17:03:59.914435 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8rlt"] Nov 28 17:04:00 crc kubenswrapper[4710]: I1128 17:04:00.138539 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:04:00 crc kubenswrapper[4710]: I1128 17:04:00.138598 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:04:00 crc kubenswrapper[4710]: I1128 17:04:00.195148 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:04:00 crc kubenswrapper[4710]: I1128 17:04:00.264564 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rlt" event={"ID":"31d7e40a-6e97-4337-be6d-4f93a852e342","Type":"ContainerStarted","Data":"7a849400edc7f0a2a0183f50d3070ecdb3f0cc6fee9ff40866f964c257f91f51"} Nov 28 17:04:00 crc kubenswrapper[4710]: I1128 17:04:00.300776 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gmx26" Nov 28 17:04:01 crc kubenswrapper[4710]: I1128 17:04:01.270888 4710 generic.go:334] "Generic (PLEG): container finished" podID="31d7e40a-6e97-4337-be6d-4f93a852e342" containerID="9034b7f8337b9c01c01d76811b0614a07ba53365655992935431b201af574387" exitCode=0 Nov 28 17:04:01 crc kubenswrapper[4710]: I1128 17:04:01.270938 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rlt" event={"ID":"31d7e40a-6e97-4337-be6d-4f93a852e342","Type":"ContainerDied","Data":"9034b7f8337b9c01c01d76811b0614a07ba53365655992935431b201af574387"} Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.004728 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vdv6c"] Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.006284 4710 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.016505 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdv6c"] Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.098256 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8d16a8e-94b3-4552-873c-a100d1fa8bc6-utilities\") pod \"redhat-marketplace-vdv6c\" (UID: \"d8d16a8e-94b3-4552-873c-a100d1fa8bc6\") " pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.098435 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfm6f\" (UniqueName: \"kubernetes.io/projected/d8d16a8e-94b3-4552-873c-a100d1fa8bc6-kube-api-access-zfm6f\") pod \"redhat-marketplace-vdv6c\" (UID: \"d8d16a8e-94b3-4552-873c-a100d1fa8bc6\") " pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.098501 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8d16a8e-94b3-4552-873c-a100d1fa8bc6-catalog-content\") pod \"redhat-marketplace-vdv6c\" (UID: \"d8d16a8e-94b3-4552-873c-a100d1fa8bc6\") " pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.199658 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8d16a8e-94b3-4552-873c-a100d1fa8bc6-utilities\") pod \"redhat-marketplace-vdv6c\" (UID: \"d8d16a8e-94b3-4552-873c-a100d1fa8bc6\") " pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.199824 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfm6f\" (UniqueName: \"kubernetes.io/projected/d8d16a8e-94b3-4552-873c-a100d1fa8bc6-kube-api-access-zfm6f\") pod \"redhat-marketplace-vdv6c\" (UID: \"d8d16a8e-94b3-4552-873c-a100d1fa8bc6\") " pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.199889 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8d16a8e-94b3-4552-873c-a100d1fa8bc6-catalog-content\") pod \"redhat-marketplace-vdv6c\" (UID: \"d8d16a8e-94b3-4552-873c-a100d1fa8bc6\") " pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.200221 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8d16a8e-94b3-4552-873c-a100d1fa8bc6-utilities\") pod \"redhat-marketplace-vdv6c\" (UID: \"d8d16a8e-94b3-4552-873c-a100d1fa8bc6\") " pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.200251 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8d16a8e-94b3-4552-873c-a100d1fa8bc6-catalog-content\") pod \"redhat-marketplace-vdv6c\" (UID: \"d8d16a8e-94b3-4552-873c-a100d1fa8bc6\") " pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.219974 4710 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zfm6f\" (UniqueName: \"kubernetes.io/projected/d8d16a8e-94b3-4552-873c-a100d1fa8bc6-kube-api-access-zfm6f\") pod \"redhat-marketplace-vdv6c\" (UID: \"d8d16a8e-94b3-4552-873c-a100d1fa8bc6\") " pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.284708 4710 generic.go:334] "Generic (PLEG): container finished" podID="31d7e40a-6e97-4337-be6d-4f93a852e342" containerID="adabcdaaf6f6b5682c6f852b4f2f06faa54e91982f6696dd2951ebd789d0ed8d" exitCode=0 Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.284776 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rlt" event={"ID":"31d7e40a-6e97-4337-be6d-4f93a852e342","Type":"ContainerDied","Data":"adabcdaaf6f6b5682c6f852b4f2f06faa54e91982f6696dd2951ebd789d0ed8d"} Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.335115 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:03 crc kubenswrapper[4710]: I1128 17:04:03.727190 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vdv6c"] Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.204685 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tdbpv"] Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.215191 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.226930 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdbpv"] Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.291872 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8rlt" event={"ID":"31d7e40a-6e97-4337-be6d-4f93a852e342","Type":"ContainerStarted","Data":"007404c8629244b7b2391aa2836bfa6cfd6aa90f018ebc952954c2e58ff571ae"} Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.293204 4710 generic.go:334] "Generic (PLEG): container finished" podID="d8d16a8e-94b3-4552-873c-a100d1fa8bc6" containerID="051d33a231f7bb9da68e73a1fdc59640c2153290bc433edba01272876a7f8d05" exitCode=0 Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.293247 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdv6c" event={"ID":"d8d16a8e-94b3-4552-873c-a100d1fa8bc6","Type":"ContainerDied","Data":"051d33a231f7bb9da68e73a1fdc59640c2153290bc433edba01272876a7f8d05"} Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.293275 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdv6c" event={"ID":"d8d16a8e-94b3-4552-873c-a100d1fa8bc6","Type":"ContainerStarted","Data":"35cf861de4220214e5ad6f8e504422b23176fd7383eaa40af5813d26053e6603"} Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.314139 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff5b3d8-f488-487f-9407-07c88e139d95-utilities\") pod \"redhat-marketplace-tdbpv\" (UID: \"aff5b3d8-f488-487f-9407-07c88e139d95\") " pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.314193 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-9r54r\" (UniqueName: \"kubernetes.io/projected/aff5b3d8-f488-487f-9407-07c88e139d95-kube-api-access-9r54r\") pod \"redhat-marketplace-tdbpv\" (UID: \"aff5b3d8-f488-487f-9407-07c88e139d95\") " pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.314317 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff5b3d8-f488-487f-9407-07c88e139d95-catalog-content\") pod \"redhat-marketplace-tdbpv\" (UID: \"aff5b3d8-f488-487f-9407-07c88e139d95\") " pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.337504 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k8rlt" podStartSLOduration=2.731847274 podStartE2EDuration="5.337487461s" podCreationTimestamp="2025-11-28 17:03:59 +0000 UTC" firstStartedPulling="2025-11-28 17:04:01.272278272 +0000 UTC m=+330.530578317" lastFinishedPulling="2025-11-28 17:04:03.877918459 +0000 UTC m=+333.136218504" observedRunningTime="2025-11-28 17:04:04.311473692 +0000 UTC m=+333.569773747" watchObservedRunningTime="2025-11-28 17:04:04.337487461 +0000 UTC m=+333.595787496" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.415075 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff5b3d8-f488-487f-9407-07c88e139d95-catalog-content\") pod \"redhat-marketplace-tdbpv\" (UID: \"aff5b3d8-f488-487f-9407-07c88e139d95\") " pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.415200 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff5b3d8-f488-487f-9407-07c88e139d95-utilities\") pod \"redhat-marketplace-tdbpv\" (UID: \"aff5b3d8-f488-487f-9407-07c88e139d95\") " pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.415230 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r54r\" (UniqueName: \"kubernetes.io/projected/aff5b3d8-f488-487f-9407-07c88e139d95-kube-api-access-9r54r\") pod \"redhat-marketplace-tdbpv\" (UID: \"aff5b3d8-f488-487f-9407-07c88e139d95\") " pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.416150 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aff5b3d8-f488-487f-9407-07c88e139d95-catalog-content\") pod \"redhat-marketplace-tdbpv\" (UID: \"aff5b3d8-f488-487f-9407-07c88e139d95\") " pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.416328 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aff5b3d8-f488-487f-9407-07c88e139d95-utilities\") pod \"redhat-marketplace-tdbpv\" (UID: \"aff5b3d8-f488-487f-9407-07c88e139d95\") " pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.438171 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r54r\" (UniqueName: \"kubernetes.io/projected/aff5b3d8-f488-487f-9407-07c88e139d95-kube-api-access-9r54r\") pod 
\"redhat-marketplace-tdbpv\" (UID: \"aff5b3d8-f488-487f-9407-07c88e139d95\") " pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.534956 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.732865 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.733209 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.770352 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:04:04 crc kubenswrapper[4710]: I1128 17:04:04.935528 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tdbpv"] Nov 28 17:04:04 crc kubenswrapper[4710]: W1128 17:04:04.938576 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaff5b3d8_f488_487f_9407_07c88e139d95.slice/crio-4ab721ad0b10d0713b8b09e98ef5eb52b55f05fd6d26b891fb77b99eae2cd75c WatchSource:0}: Error finding container 4ab721ad0b10d0713b8b09e98ef5eb52b55f05fd6d26b891fb77b99eae2cd75c: Status 404 returned error can't find the container with id 4ab721ad0b10d0713b8b09e98ef5eb52b55f05fd6d26b891fb77b99eae2cd75c Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.299459 4710 generic.go:334] "Generic (PLEG): container finished" podID="aff5b3d8-f488-487f-9407-07c88e139d95" containerID="9465efc89dd69ec32e0b6463df372609a36c376ff751f5a2de8fce9836238d27" exitCode=0 Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.299520 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbpv" event={"ID":"aff5b3d8-f488-487f-9407-07c88e139d95","Type":"ContainerDied","Data":"9465efc89dd69ec32e0b6463df372609a36c376ff751f5a2de8fce9836238d27"} Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.299892 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbpv" event={"ID":"aff5b3d8-f488-487f-9407-07c88e139d95","Type":"ContainerStarted","Data":"4ab721ad0b10d0713b8b09e98ef5eb52b55f05fd6d26b891fb77b99eae2cd75c"} Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.343147 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sl4b6" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.606184 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m5srj"] Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.607190 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.620185 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5srj"] Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.632053 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7chwb\" (UniqueName: \"kubernetes.io/projected/32d30eea-067e-4b8c-8bd4-a6dd02440a71-kube-api-access-7chwb\") pod \"redhat-marketplace-m5srj\" (UID: \"32d30eea-067e-4b8c-8bd4-a6dd02440a71\") " pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.632323 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d30eea-067e-4b8c-8bd4-a6dd02440a71-catalog-content\") pod \"redhat-marketplace-m5srj\" (UID: \"32d30eea-067e-4b8c-8bd4-a6dd02440a71\") " pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.632366 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d30eea-067e-4b8c-8bd4-a6dd02440a71-utilities\") pod \"redhat-marketplace-m5srj\" (UID: \"32d30eea-067e-4b8c-8bd4-a6dd02440a71\") " pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.734154 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7chwb\" (UniqueName: \"kubernetes.io/projected/32d30eea-067e-4b8c-8bd4-a6dd02440a71-kube-api-access-7chwb\") pod \"redhat-marketplace-m5srj\" (UID: \"32d30eea-067e-4b8c-8bd4-a6dd02440a71\") " pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.734338 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d30eea-067e-4b8c-8bd4-a6dd02440a71-catalog-content\") pod \"redhat-marketplace-m5srj\" (UID: \"32d30eea-067e-4b8c-8bd4-a6dd02440a71\") " pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.734371 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d30eea-067e-4b8c-8bd4-a6dd02440a71-utilities\") pod \"redhat-marketplace-m5srj\" (UID: \"32d30eea-067e-4b8c-8bd4-a6dd02440a71\") " pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.735377 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32d30eea-067e-4b8c-8bd4-a6dd02440a71-utilities\") pod \"redhat-marketplace-m5srj\" (UID: \"32d30eea-067e-4b8c-8bd4-a6dd02440a71\") " pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.735498 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32d30eea-067e-4b8c-8bd4-a6dd02440a71-catalog-content\") pod \"redhat-marketplace-m5srj\" (UID: \"32d30eea-067e-4b8c-8bd4-a6dd02440a71\") " pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.758269 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7chwb\" (UniqueName: \"kubernetes.io/projected/32d30eea-067e-4b8c-8bd4-a6dd02440a71-kube-api-access-7chwb\") pod \"redhat-marketplace-m5srj\" (UID: \"32d30eea-067e-4b8c-8bd4-a6dd02440a71\") " pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:05 crc kubenswrapper[4710]: I1128 17:04:05.978686 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.305938 4710 generic.go:334] "Generic (PLEG): container finished" podID="d8d16a8e-94b3-4552-873c-a100d1fa8bc6" containerID="ed20fd345102a966b736dae501059bcbb5046d0d110afc3531c3ca407f82a76a" exitCode=0 Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.305993 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdv6c" event={"ID":"d8d16a8e-94b3-4552-873c-a100d1fa8bc6","Type":"ContainerDied","Data":"ed20fd345102a966b736dae501059bcbb5046d0d110afc3531c3ca407f82a76a"} Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.395794 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m5srj"] Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.805687 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5l7l6"] Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.806929 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.816905 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5l7l6"] Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.854161 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41bc6f92-7755-4ed7-94ab-b21b82284a9f-catalog-content\") pod \"redhat-marketplace-5l7l6\" (UID: \"41bc6f92-7755-4ed7-94ab-b21b82284a9f\") " pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.854219 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4md\" (UniqueName: \"kubernetes.io/projected/41bc6f92-7755-4ed7-94ab-b21b82284a9f-kube-api-access-tl4md\") pod \"redhat-marketplace-5l7l6\" (UID: \"41bc6f92-7755-4ed7-94ab-b21b82284a9f\") " pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.854465 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41bc6f92-7755-4ed7-94ab-b21b82284a9f-utilities\") pod \"redhat-marketplace-5l7l6\" (UID: \"41bc6f92-7755-4ed7-94ab-b21b82284a9f\") " pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.956509 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41bc6f92-7755-4ed7-94ab-b21b82284a9f-catalog-content\") pod \"redhat-marketplace-5l7l6\" (UID: \"41bc6f92-7755-4ed7-94ab-b21b82284a9f\") " pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.956834 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-tl4md\" (UniqueName: \"kubernetes.io/projected/41bc6f92-7755-4ed7-94ab-b21b82284a9f-kube-api-access-tl4md\") pod \"redhat-marketplace-5l7l6\" (UID: \"41bc6f92-7755-4ed7-94ab-b21b82284a9f\") " pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.957127 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41bc6f92-7755-4ed7-94ab-b21b82284a9f-catalog-content\") pod \"redhat-marketplace-5l7l6\" (UID: \"41bc6f92-7755-4ed7-94ab-b21b82284a9f\") " pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.957567 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41bc6f92-7755-4ed7-94ab-b21b82284a9f-utilities\") pod \"redhat-marketplace-5l7l6\" (UID: \"41bc6f92-7755-4ed7-94ab-b21b82284a9f\") " pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.958091 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41bc6f92-7755-4ed7-94ab-b21b82284a9f-utilities\") pod \"redhat-marketplace-5l7l6\" (UID: \"41bc6f92-7755-4ed7-94ab-b21b82284a9f\") " pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:06 crc kubenswrapper[4710]: I1128 17:04:06.973906 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl4md\" (UniqueName: \"kubernetes.io/projected/41bc6f92-7755-4ed7-94ab-b21b82284a9f-kube-api-access-tl4md\") pod \"redhat-marketplace-5l7l6\" (UID: \"41bc6f92-7755-4ed7-94ab-b21b82284a9f\") " pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:07 crc kubenswrapper[4710]: I1128 17:04:07.120563 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:07 crc kubenswrapper[4710]: I1128 17:04:07.325245 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vdv6c" event={"ID":"d8d16a8e-94b3-4552-873c-a100d1fa8bc6","Type":"ContainerStarted","Data":"55868d3ed514fa3508f7fd6a281223c986aa343de35ea15ea85ae2ed9dadcfef"} Nov 28 17:04:07 crc kubenswrapper[4710]: I1128 17:04:07.328533 4710 generic.go:334] "Generic (PLEG): container finished" podID="aff5b3d8-f488-487f-9407-07c88e139d95" containerID="bbddc4cd460a2c5810cb2980b59d1ccf94431dcbda1e553730f174489f8706b8" exitCode=0 Nov 28 17:04:07 crc kubenswrapper[4710]: I1128 17:04:07.328600 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbpv" event={"ID":"aff5b3d8-f488-487f-9407-07c88e139d95","Type":"ContainerDied","Data":"bbddc4cd460a2c5810cb2980b59d1ccf94431dcbda1e553730f174489f8706b8"} Nov 28 17:04:07 crc kubenswrapper[4710]: I1128 17:04:07.331962 4710 generic.go:334] "Generic (PLEG): container finished" podID="32d30eea-067e-4b8c-8bd4-a6dd02440a71" containerID="49c442db975f40fd16c44c52a56510a9bbac46026d83d511e3cb453ad4f4b4d8" exitCode=0 Nov 28 17:04:07 crc kubenswrapper[4710]: I1128 17:04:07.332039 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5srj" event={"ID":"32d30eea-067e-4b8c-8bd4-a6dd02440a71","Type":"ContainerDied","Data":"49c442db975f40fd16c44c52a56510a9bbac46026d83d511e3cb453ad4f4b4d8"} Nov 28 17:04:07 crc kubenswrapper[4710]: I1128 17:04:07.332067 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5srj" event={"ID":"32d30eea-067e-4b8c-8bd4-a6dd02440a71","Type":"ContainerStarted","Data":"938c6e4f6848a8cc81ce324c85431aa891797caba47ff20e5a0348f01a3fd3e7"} Nov 28 17:04:07 crc kubenswrapper[4710]: I1128 17:04:07.349859 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vdv6c" podStartSLOduration=2.781647614 podStartE2EDuration="5.349839054s" podCreationTimestamp="2025-11-28 17:04:02 +0000 UTC" firstStartedPulling="2025-11-28 17:04:04.294522144 +0000 UTC m=+333.552822189" lastFinishedPulling="2025-11-28 17:04:06.862713584 +0000 UTC m=+336.121013629" observedRunningTime="2025-11-28 17:04:07.344119176 +0000 UTC m=+336.602419241" watchObservedRunningTime="2025-11-28 17:04:07.349839054 +0000 UTC m=+336.608139099" Nov 28 17:04:07 crc kubenswrapper[4710]: I1128 17:04:07.542559 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5l7l6"] Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.009380 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-khfkl"] Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.010677 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.019656 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khfkl"] Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.080439 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95fd8509-97bd-4d02-87d5-3593b426ef44-catalog-content\") pod \"redhat-marketplace-khfkl\" (UID: \"95fd8509-97bd-4d02-87d5-3593b426ef44\") " pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.080494 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjjjk\" (UniqueName: \"kubernetes.io/projected/95fd8509-97bd-4d02-87d5-3593b426ef44-kube-api-access-sjjjk\") pod \"redhat-marketplace-khfkl\" (UID: \"95fd8509-97bd-4d02-87d5-3593b426ef44\") " pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.080570 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95fd8509-97bd-4d02-87d5-3593b426ef44-utilities\") pod \"redhat-marketplace-khfkl\" (UID: \"95fd8509-97bd-4d02-87d5-3593b426ef44\") " pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.182057 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjjjk\" (UniqueName: \"kubernetes.io/projected/95fd8509-97bd-4d02-87d5-3593b426ef44-kube-api-access-sjjjk\") pod \"redhat-marketplace-khfkl\" (UID: \"95fd8509-97bd-4d02-87d5-3593b426ef44\") " pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.182174 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95fd8509-97bd-4d02-87d5-3593b426ef44-utilities\") pod \"redhat-marketplace-khfkl\" (UID: \"95fd8509-97bd-4d02-87d5-3593b426ef44\") " pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.182257 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95fd8509-97bd-4d02-87d5-3593b426ef44-catalog-content\") pod \"redhat-marketplace-khfkl\" (UID: \"95fd8509-97bd-4d02-87d5-3593b426ef44\") " pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.182732 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95fd8509-97bd-4d02-87d5-3593b426ef44-catalog-content\") pod \"redhat-marketplace-khfkl\" (UID: \"95fd8509-97bd-4d02-87d5-3593b426ef44\") " pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.182895 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95fd8509-97bd-4d02-87d5-3593b426ef44-utilities\") pod \"redhat-marketplace-khfkl\" (UID: \"95fd8509-97bd-4d02-87d5-3593b426ef44\") " pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.200690 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sjjjk\" (UniqueName: \"kubernetes.io/projected/95fd8509-97bd-4d02-87d5-3593b426ef44-kube-api-access-sjjjk\") pod \"redhat-marketplace-khfkl\" (UID: \"95fd8509-97bd-4d02-87d5-3593b426ef44\") " pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.338254 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5srj" event={"ID":"32d30eea-067e-4b8c-8bd4-a6dd02440a71","Type":"ContainerStarted","Data":"961dbfb35c818570f81a536fea4b3638d703550d63ff387ef16c1ec0080db1e4"} Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.341957 4710 generic.go:334] "Generic (PLEG): container finished" podID="41bc6f92-7755-4ed7-94ab-b21b82284a9f" containerID="247767fb531634803d8c0fc5c3f360c96e86978d044af319448c7ae9415cd297" exitCode=0 Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.342609 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l7l6" event={"ID":"41bc6f92-7755-4ed7-94ab-b21b82284a9f","Type":"ContainerDied","Data":"247767fb531634803d8c0fc5c3f360c96e86978d044af319448c7ae9415cd297"} Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.342679 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l7l6" event={"ID":"41bc6f92-7755-4ed7-94ab-b21b82284a9f","Type":"ContainerStarted","Data":"0e86ef7ae76fb50fcd095d6c293e5a38002fc693072ade9c9b06ca00b64c6dc9"} Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.346689 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:08 crc kubenswrapper[4710]: I1128 17:04:08.784963 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-khfkl"] Nov 28 17:04:08 crc kubenswrapper[4710]: W1128 17:04:08.792402 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95fd8509_97bd_4d02_87d5_3593b426ef44.slice/crio-f9fb942bee1854f29ff108ff966373b4cbd5e6cbe711dec9924b33e2217353d7 WatchSource:0}: Error finding container f9fb942bee1854f29ff108ff966373b4cbd5e6cbe711dec9924b33e2217353d7: Status 404 returned error can't find the container with id f9fb942bee1854f29ff108ff966373b4cbd5e6cbe711dec9924b33e2217353d7 Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.203883 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dq7q7"] Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.205226 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.221714 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dq7q7"] Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.308184 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbcd726-3ba8-41eb-9b6c-9648483ec935-catalog-content\") pod \"redhat-marketplace-dq7q7\" (UID: \"6fbcd726-3ba8-41eb-9b6c-9648483ec935\") " pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.308247 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbcd726-3ba8-41eb-9b6c-9648483ec935-utilities\") pod \"redhat-marketplace-dq7q7\" (UID: \"6fbcd726-3ba8-41eb-9b6c-9648483ec935\") " pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.308324 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9znp\" (UniqueName: \"kubernetes.io/projected/6fbcd726-3ba8-41eb-9b6c-9648483ec935-kube-api-access-d9znp\") pod \"redhat-marketplace-dq7q7\" (UID: \"6fbcd726-3ba8-41eb-9b6c-9648483ec935\") " pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.349588 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tdbpv" event={"ID":"aff5b3d8-f488-487f-9407-07c88e139d95","Type":"ContainerStarted","Data":"99b800bdd3aaddfde8131e6118252da853df908ac20be27cb942149f5346a6fd"} Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.352351 4710 generic.go:334] "Generic (PLEG): container finished" podID="95fd8509-97bd-4d02-87d5-3593b426ef44" containerID="d65b17eb3e85cc70621442c40091495212f30307ab3d3459ae510430e68627de" exitCode=0 Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.352420 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkl" event={"ID":"95fd8509-97bd-4d02-87d5-3593b426ef44","Type":"ContainerDied","Data":"d65b17eb3e85cc70621442c40091495212f30307ab3d3459ae510430e68627de"} Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.352443 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkl" event={"ID":"95fd8509-97bd-4d02-87d5-3593b426ef44","Type":"ContainerStarted","Data":"f9fb942bee1854f29ff108ff966373b4cbd5e6cbe711dec9924b33e2217353d7"} Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.355270 4710 generic.go:334] "Generic (PLEG): container finished" podID="32d30eea-067e-4b8c-8bd4-a6dd02440a71" containerID="961dbfb35c818570f81a536fea4b3638d703550d63ff387ef16c1ec0080db1e4" exitCode=0 Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.355306 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5srj" event={"ID":"32d30eea-067e-4b8c-8bd4-a6dd02440a71","Type":"ContainerDied","Data":"961dbfb35c818570f81a536fea4b3638d703550d63ff387ef16c1ec0080db1e4"} Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.380065 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tdbpv" podStartSLOduration=2.386237624 podStartE2EDuration="5.38004514s" 
podCreationTimestamp="2025-11-28 17:04:04 +0000 UTC" firstStartedPulling="2025-11-28 17:04:05.300727219 +0000 UTC m=+334.559027264" lastFinishedPulling="2025-11-28 17:04:08.294534715 +0000 UTC m=+337.552834780" observedRunningTime="2025-11-28 17:04:09.374183708 +0000 UTC m=+338.632483753" watchObservedRunningTime="2025-11-28 17:04:09.38004514 +0000 UTC m=+338.638345195" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.409633 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbcd726-3ba8-41eb-9b6c-9648483ec935-catalog-content\") pod \"redhat-marketplace-dq7q7\" (UID: \"6fbcd726-3ba8-41eb-9b6c-9648483ec935\") " pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.409720 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbcd726-3ba8-41eb-9b6c-9648483ec935-utilities\") pod \"redhat-marketplace-dq7q7\" (UID: \"6fbcd726-3ba8-41eb-9b6c-9648483ec935\") " pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.409843 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9znp\" (UniqueName: \"kubernetes.io/projected/6fbcd726-3ba8-41eb-9b6c-9648483ec935-kube-api-access-d9znp\") pod \"redhat-marketplace-dq7q7\" (UID: \"6fbcd726-3ba8-41eb-9b6c-9648483ec935\") " pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.410333 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fbcd726-3ba8-41eb-9b6c-9648483ec935-catalog-content\") pod \"redhat-marketplace-dq7q7\" (UID: \"6fbcd726-3ba8-41eb-9b6c-9648483ec935\") " pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.410431 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fbcd726-3ba8-41eb-9b6c-9648483ec935-utilities\") pod \"redhat-marketplace-dq7q7\" (UID: \"6fbcd726-3ba8-41eb-9b6c-9648483ec935\") " pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.431545 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9znp\" (UniqueName: \"kubernetes.io/projected/6fbcd726-3ba8-41eb-9b6c-9648483ec935-kube-api-access-d9znp\") pod \"redhat-marketplace-dq7q7\" (UID: \"6fbcd726-3ba8-41eb-9b6c-9648483ec935\") " pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.521554 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.522159 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.565950 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.617175 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:09 crc kubenswrapper[4710]: I1128 17:04:09.984844 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dq7q7"] Nov 28 17:04:09 crc kubenswrapper[4710]: W1128 17:04:09.990312 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fbcd726_3ba8_41eb_9b6c_9648483ec935.slice/crio-36e77bae9129d7d1421fc023439f4219f8bcf4c89d8e46035b5a39e35d4ec1a0 WatchSource:0}: Error finding container 36e77bae9129d7d1421fc023439f4219f8bcf4c89d8e46035b5a39e35d4ec1a0: Status 404 returned error can't find the container with id 36e77bae9129d7d1421fc023439f4219f8bcf4c89d8e46035b5a39e35d4ec1a0 Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.376313 4710 generic.go:334] "Generic (PLEG): container finished" podID="6fbcd726-3ba8-41eb-9b6c-9648483ec935" containerID="21b17daaaf155ff03c74190ab490ccea5fdf4d15d0418abdf6a26e77e3991394" exitCode=0 Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.376396 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq7q7" event={"ID":"6fbcd726-3ba8-41eb-9b6c-9648483ec935","Type":"ContainerDied","Data":"21b17daaaf155ff03c74190ab490ccea5fdf4d15d0418abdf6a26e77e3991394"} Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.376438 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq7q7" event={"ID":"6fbcd726-3ba8-41eb-9b6c-9648483ec935","Type":"ContainerStarted","Data":"36e77bae9129d7d1421fc023439f4219f8bcf4c89d8e46035b5a39e35d4ec1a0"} Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.395157 4710 generic.go:334] "Generic (PLEG): container finished" podID="41bc6f92-7755-4ed7-94ab-b21b82284a9f" containerID="fb6652e914166c8c6f613657aaf21fe93feb20f4cdd36d6aee1a584bd38f6b83" exitCode=0 Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.395811 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-679d948996-jjcpd"] Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.395867 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l7l6" event={"ID":"41bc6f92-7755-4ed7-94ab-b21b82284a9f","Type":"ContainerDied","Data":"fb6652e914166c8c6f613657aaf21fe93feb20f4cdd36d6aee1a584bd38f6b83"} Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.396293 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" podUID="0ea3c254-1948-428e-a9af-4390bb516cea" containerName="controller-manager" containerID="cri-o://51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b" gracePeriod=30 Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.437872 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw"] Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.438170 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" podUID="b7d25392-1fae-413d-ad03-20f53f1ac112" containerName="route-controller-manager" containerID="cri-o://749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d" gracePeriod=30 Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.463854 4710 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npsxw"] Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.465004 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.472873 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npsxw"] Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.497251 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k8rlt" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.525046 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de54a7a-65ab-4560-a62e-0fb531a0ca92-catalog-content\") pod \"redhat-marketplace-npsxw\" (UID: \"4de54a7a-65ab-4560-a62e-0fb531a0ca92\") " pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.525113 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b9x5\" (UniqueName: \"kubernetes.io/projected/4de54a7a-65ab-4560-a62e-0fb531a0ca92-kube-api-access-5b9x5\") pod \"redhat-marketplace-npsxw\" (UID: \"4de54a7a-65ab-4560-a62e-0fb531a0ca92\") " pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.525149 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de54a7a-65ab-4560-a62e-0fb531a0ca92-utilities\") pod \"redhat-marketplace-npsxw\" (UID: \"4de54a7a-65ab-4560-a62e-0fb531a0ca92\") " pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.625831 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de54a7a-65ab-4560-a62e-0fb531a0ca92-utilities\") pod \"redhat-marketplace-npsxw\" (UID: \"4de54a7a-65ab-4560-a62e-0fb531a0ca92\") " pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.625968 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de54a7a-65ab-4560-a62e-0fb531a0ca92-catalog-content\") pod \"redhat-marketplace-npsxw\" (UID: \"4de54a7a-65ab-4560-a62e-0fb531a0ca92\") " pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.626015 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b9x5\" (UniqueName: \"kubernetes.io/projected/4de54a7a-65ab-4560-a62e-0fb531a0ca92-kube-api-access-5b9x5\") pod \"redhat-marketplace-npsxw\" (UID: \"4de54a7a-65ab-4560-a62e-0fb531a0ca92\") " pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.626493 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4de54a7a-65ab-4560-a62e-0fb531a0ca92-utilities\") pod \"redhat-marketplace-npsxw\" (UID: \"4de54a7a-65ab-4560-a62e-0fb531a0ca92\") " pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.626578 4710 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4de54a7a-65ab-4560-a62e-0fb531a0ca92-catalog-content\") pod \"redhat-marketplace-npsxw\" (UID: \"4de54a7a-65ab-4560-a62e-0fb531a0ca92\") " pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.649350 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b9x5\" (UniqueName: \"kubernetes.io/projected/4de54a7a-65ab-4560-a62e-0fb531a0ca92-kube-api-access-5b9x5\") pod \"redhat-marketplace-npsxw\" (UID: \"4de54a7a-65ab-4560-a62e-0fb531a0ca92\") " pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.659718 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npsxw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.952228 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:04:10 crc kubenswrapper[4710]: I1128 17:04:10.979138 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.032364 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nhdv\" (UniqueName: \"kubernetes.io/projected/0ea3c254-1948-428e-a9af-4390bb516cea-kube-api-access-5nhdv\") pod \"0ea3c254-1948-428e-a9af-4390bb516cea\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.032400 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-config\") pod \"b7d25392-1fae-413d-ad03-20f53f1ac112\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.032433 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-client-ca\") pod \"b7d25392-1fae-413d-ad03-20f53f1ac112\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.032454 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-config\") pod \"0ea3c254-1948-428e-a9af-4390bb516cea\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.032488 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czgzj\" (UniqueName: \"kubernetes.io/projected/b7d25392-1fae-413d-ad03-20f53f1ac112-kube-api-access-czgzj\") pod \"b7d25392-1fae-413d-ad03-20f53f1ac112\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.032521 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-client-ca\") pod \"0ea3c254-1948-428e-a9af-4390bb516cea\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.032538 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-proxy-ca-bundles\") pod \"0ea3c254-1948-428e-a9af-4390bb516cea\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.032552 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea3c254-1948-428e-a9af-4390bb516cea-serving-cert\") pod \"0ea3c254-1948-428e-a9af-4390bb516cea\" (UID: \"0ea3c254-1948-428e-a9af-4390bb516cea\") " Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.032578 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d25392-1fae-413d-ad03-20f53f1ac112-serving-cert\") pod \"b7d25392-1fae-413d-ad03-20f53f1ac112\" (UID: \"b7d25392-1fae-413d-ad03-20f53f1ac112\") " Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.033218 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-client-ca" (OuterVolumeSpecName: "client-ca") pod "b7d25392-1fae-413d-ad03-20f53f1ac112" (UID: "b7d25392-1fae-413d-ad03-20f53f1ac112"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.033499 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-client-ca" (OuterVolumeSpecName: "client-ca") pod "0ea3c254-1948-428e-a9af-4390bb516cea" (UID: "0ea3c254-1948-428e-a9af-4390bb516cea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.033294 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-config" (OuterVolumeSpecName: "config") pod "b7d25392-1fae-413d-ad03-20f53f1ac112" (UID: "b7d25392-1fae-413d-ad03-20f53f1ac112"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.034002 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-config" (OuterVolumeSpecName: "config") pod "0ea3c254-1948-428e-a9af-4390bb516cea" (UID: "0ea3c254-1948-428e-a9af-4390bb516cea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.034019 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0ea3c254-1948-428e-a9af-4390bb516cea" (UID: "0ea3c254-1948-428e-a9af-4390bb516cea"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.036801 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ea3c254-1948-428e-a9af-4390bb516cea-kube-api-access-5nhdv" (OuterVolumeSpecName: "kube-api-access-5nhdv") pod "0ea3c254-1948-428e-a9af-4390bb516cea" (UID: "0ea3c254-1948-428e-a9af-4390bb516cea"). InnerVolumeSpecName "kube-api-access-5nhdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.036841 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7d25392-1fae-413d-ad03-20f53f1ac112-kube-api-access-czgzj" (OuterVolumeSpecName: "kube-api-access-czgzj") pod "b7d25392-1fae-413d-ad03-20f53f1ac112" (UID: "b7d25392-1fae-413d-ad03-20f53f1ac112"). InnerVolumeSpecName "kube-api-access-czgzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.036992 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7d25392-1fae-413d-ad03-20f53f1ac112-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b7d25392-1fae-413d-ad03-20f53f1ac112" (UID: "b7d25392-1fae-413d-ad03-20f53f1ac112"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.037438 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ea3c254-1948-428e-a9af-4390bb516cea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0ea3c254-1948-428e-a9af-4390bb516cea" (UID: "0ea3c254-1948-428e-a9af-4390bb516cea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.134160 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nhdv\" (UniqueName: \"kubernetes.io/projected/0ea3c254-1948-428e-a9af-4390bb516cea-kube-api-access-5nhdv\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.134194 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.134205 4710 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7d25392-1fae-413d-ad03-20f53f1ac112-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.134215 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.134224 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czgzj\" (UniqueName: \"kubernetes.io/projected/b7d25392-1fae-413d-ad03-20f53f1ac112-kube-api-access-czgzj\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.134232 4710 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-client-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.134240 4710 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0ea3c254-1948-428e-a9af-4390bb516cea-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.134248 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ea3c254-1948-428e-a9af-4390bb516cea-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 
17:04:11.134256 4710 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7d25392-1fae-413d-ad03-20f53f1ac112-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.171632 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npsxw"] Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.402381 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5l7l6" event={"ID":"41bc6f92-7755-4ed7-94ab-b21b82284a9f","Type":"ContainerStarted","Data":"01558c24fb1e6f2071745996472d88da8194e136e6be2e2320a89ae91f79e256"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.403638 4710 generic.go:334] "Generic (PLEG): container finished" podID="4de54a7a-65ab-4560-a62e-0fb531a0ca92" containerID="25664ad6da185b181c50a3a522dcd5ac15013152e59985b471db4c24c260cf9b" exitCode=0 Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.403709 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npsxw" event={"ID":"4de54a7a-65ab-4560-a62e-0fb531a0ca92","Type":"ContainerDied","Data":"25664ad6da185b181c50a3a522dcd5ac15013152e59985b471db4c24c260cf9b"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.403745 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npsxw" event={"ID":"4de54a7a-65ab-4560-a62e-0fb531a0ca92","Type":"ContainerStarted","Data":"9d14deaebe56286b6e13d434b76c6fac85f172c63d2473195ae9e3123dda35b8"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.405857 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq7q7" event={"ID":"6fbcd726-3ba8-41eb-9b6c-9648483ec935","Type":"ContainerStarted","Data":"8d287fdf7084b5053211e99284fb41486021767cc919d07728022a601962681b"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.407992 4710 generic.go:334] "Generic (PLEG): container finished" podID="b7d25392-1fae-413d-ad03-20f53f1ac112" containerID="749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d" exitCode=0 Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.408056 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" event={"ID":"b7d25392-1fae-413d-ad03-20f53f1ac112","Type":"ContainerDied","Data":"749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.408076 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" event={"ID":"b7d25392-1fae-413d-ad03-20f53f1ac112","Type":"ContainerDied","Data":"d50c7c2af70f992864eac8deaea9a6384b33dd4aa01d9b5b5d26b82253334950"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.408093 4710 scope.go:117] "RemoveContainer" containerID="749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.408217 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.411704 4710 generic.go:334] "Generic (PLEG): container finished" podID="0ea3c254-1948-428e-a9af-4390bb516cea" containerID="51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b" exitCode=0 Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.411776 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.411836 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" event={"ID":"0ea3c254-1948-428e-a9af-4390bb516cea","Type":"ContainerDied","Data":"51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.411859 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-679d948996-jjcpd" event={"ID":"0ea3c254-1948-428e-a9af-4390bb516cea","Type":"ContainerDied","Data":"355afcac37a985192f4725f148448a3e58a8b0807329df744d59ffcde133fc5c"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.420178 4710 generic.go:334] "Generic (PLEG): container finished" podID="95fd8509-97bd-4d02-87d5-3593b426ef44" containerID="b901c85c95cecca1f7c9efba30a8b8dfd894bf5de4167b45172a6a98938a0bf8" exitCode=0 Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.420280 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkl" event={"ID":"95fd8509-97bd-4d02-87d5-3593b426ef44","Type":"ContainerDied","Data":"b901c85c95cecca1f7c9efba30a8b8dfd894bf5de4167b45172a6a98938a0bf8"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.427223 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5l7l6" podStartSLOduration=2.633210732 podStartE2EDuration="5.427203805s" podCreationTimestamp="2025-11-28 17:04:06 +0000 UTC" firstStartedPulling="2025-11-28 17:04:08.343369195 +0000 UTC m=+337.601669240" lastFinishedPulling="2025-11-28 17:04:11.137362268 +0000 UTC m=+340.395662313" observedRunningTime="2025-11-28 17:04:11.424054447 +0000 UTC m=+340.682354502" watchObservedRunningTime="2025-11-28 17:04:11.427203805 +0000 UTC m=+340.685503840" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.428099 4710 scope.go:117] "RemoveContainer" containerID="749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d" Nov 28 17:04:11 crc kubenswrapper[4710]: E1128 17:04:11.428722 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d\": container with ID starting with 749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d not found: ID does not exist" containerID="749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.428800 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d"} err="failed to get container status \"749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d\": rpc error: code = NotFound desc = could not find container 
\"749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d\": container with ID starting with 749647f925c9c74d6379c8626a97f5e85a1fb07817f1f9c90132d9f27203e47d not found: ID does not exist" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.428837 4710 scope.go:117] "RemoveContainer" containerID="51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.433445 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m5srj" event={"ID":"32d30eea-067e-4b8c-8bd4-a6dd02440a71","Type":"ContainerStarted","Data":"6b241582711a6c659ea504fa1ebff18d6c87e5f6a6a456abfff52cae730971dd"} Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.446207 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-679d948996-jjcpd"] Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.451771 4710 scope.go:117] "RemoveContainer" containerID="51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b" Nov 28 17:04:11 crc kubenswrapper[4710]: E1128 17:04:11.452229 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b\": container with ID starting with 51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b not found: ID does not exist" containerID="51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.452265 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b"} err="failed to get container status \"51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b\": rpc error: code = NotFound desc = could not find container \"51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b\": container with ID starting with 51a0f698b3e9d37f47be45c088a3ea9a658108e614a3b81fc7a20e269858768b not found: ID does not exist" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.452443 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-679d948996-jjcpd"] Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.471738 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw"] Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.479839 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-dc5d576d8-p6lhw"] Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.522726 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m5srj" podStartSLOduration=3.492069876 podStartE2EDuration="6.522705859s" podCreationTimestamp="2025-11-28 17:04:05 +0000 UTC" firstStartedPulling="2025-11-28 17:04:07.333404872 +0000 UTC m=+336.591704917" lastFinishedPulling="2025-11-28 17:04:10.364040855 +0000 UTC m=+339.622340900" observedRunningTime="2025-11-28 17:04:11.5198474 +0000 UTC m=+340.778147455" watchObservedRunningTime="2025-11-28 17:04:11.522705859 +0000 UTC m=+340.781005904" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.607948 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wdmpr"] Nov 28 17:04:11 crc 
kubenswrapper[4710]: E1128 17:04:11.608209 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7d25392-1fae-413d-ad03-20f53f1ac112" containerName="route-controller-manager" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.608225 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7d25392-1fae-413d-ad03-20f53f1ac112" containerName="route-controller-manager" Nov 28 17:04:11 crc kubenswrapper[4710]: E1128 17:04:11.608239 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ea3c254-1948-428e-a9af-4390bb516cea" containerName="controller-manager" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.608248 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ea3c254-1948-428e-a9af-4390bb516cea" containerName="controller-manager" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.608412 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7d25392-1fae-413d-ad03-20f53f1ac112" containerName="route-controller-manager" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.608435 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ea3c254-1948-428e-a9af-4390bb516cea" containerName="controller-manager" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.609601 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.615867 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdmpr"] Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.741517 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e6bdd6-6ee2-4793-b202-d0297c3843f1-catalog-content\") pod \"redhat-marketplace-wdmpr\" (UID: \"41e6bdd6-6ee2-4793-b202-d0297c3843f1\") " pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.741628 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgmh6\" (UniqueName: \"kubernetes.io/projected/41e6bdd6-6ee2-4793-b202-d0297c3843f1-kube-api-access-vgmh6\") pod \"redhat-marketplace-wdmpr\" (UID: \"41e6bdd6-6ee2-4793-b202-d0297c3843f1\") " pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.741653 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e6bdd6-6ee2-4793-b202-d0297c3843f1-utilities\") pod \"redhat-marketplace-wdmpr\" (UID: \"41e6bdd6-6ee2-4793-b202-d0297c3843f1\") " pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.843051 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgmh6\" (UniqueName: \"kubernetes.io/projected/41e6bdd6-6ee2-4793-b202-d0297c3843f1-kube-api-access-vgmh6\") pod \"redhat-marketplace-wdmpr\" (UID: \"41e6bdd6-6ee2-4793-b202-d0297c3843f1\") " pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.843125 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e6bdd6-6ee2-4793-b202-d0297c3843f1-utilities\") pod \"redhat-marketplace-wdmpr\" (UID: 
\"41e6bdd6-6ee2-4793-b202-d0297c3843f1\") " pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.843193 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e6bdd6-6ee2-4793-b202-d0297c3843f1-catalog-content\") pod \"redhat-marketplace-wdmpr\" (UID: \"41e6bdd6-6ee2-4793-b202-d0297c3843f1\") " pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.843651 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41e6bdd6-6ee2-4793-b202-d0297c3843f1-utilities\") pod \"redhat-marketplace-wdmpr\" (UID: \"41e6bdd6-6ee2-4793-b202-d0297c3843f1\") " pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.843683 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41e6bdd6-6ee2-4793-b202-d0297c3843f1-catalog-content\") pod \"redhat-marketplace-wdmpr\" (UID: \"41e6bdd6-6ee2-4793-b202-d0297c3843f1\") " pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.861396 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgmh6\" (UniqueName: \"kubernetes.io/projected/41e6bdd6-6ee2-4793-b202-d0297c3843f1-kube-api-access-vgmh6\") pod \"redhat-marketplace-wdmpr\" (UID: \"41e6bdd6-6ee2-4793-b202-d0297c3843f1\") " pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:11 crc kubenswrapper[4710]: I1128 17:04:11.928327 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.235279 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q"] Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.236365 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.246982 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.247037 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.246982 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.247808 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.247877 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.247976 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.253566 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64798f646d-fsv2m"] Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.254871 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.260899 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.261061 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.263181 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.263581 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.263723 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.268391 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.281747 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64798f646d-fsv2m"] Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.282252 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.285245 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q"] Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.348445 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wdmpr"] Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 
17:04:12.351668 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmwxp\" (UniqueName: \"kubernetes.io/projected/a058abcd-b685-4e58-8084-a33210e7b833-kube-api-access-bmwxp\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.351714 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a34aa75-5b89-4d7f-a45e-068f8f03c254-client-ca\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.351739 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkt4z\" (UniqueName: \"kubernetes.io/projected/7a34aa75-5b89-4d7f-a45e-068f8f03c254-kube-api-access-kkt4z\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.351813 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a058abcd-b685-4e58-8084-a33210e7b833-serving-cert\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.351868 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a058abcd-b685-4e58-8084-a33210e7b833-config\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.351999 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a058abcd-b685-4e58-8084-a33210e7b833-client-ca\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.352057 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a34aa75-5b89-4d7f-a45e-068f8f03c254-serving-cert\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.352277 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a34aa75-5b89-4d7f-a45e-068f8f03c254-config\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc 
kubenswrapper[4710]: I1128 17:04:12.352313 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a058abcd-b685-4e58-8084-a33210e7b833-proxy-ca-bundles\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: W1128 17:04:12.353266 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41e6bdd6_6ee2_4793_b202_d0297c3843f1.slice/crio-294567dc6b4a071a71fdb2b53914dafd745c0e9ab87bd1994db16b773fd120f1 WatchSource:0}: Error finding container 294567dc6b4a071a71fdb2b53914dafd745c0e9ab87bd1994db16b773fd120f1: Status 404 returned error can't find the container with id 294567dc6b4a071a71fdb2b53914dafd745c0e9ab87bd1994db16b773fd120f1 Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.439851 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdmpr" event={"ID":"41e6bdd6-6ee2-4793-b202-d0297c3843f1","Type":"ContainerStarted","Data":"294567dc6b4a071a71fdb2b53914dafd745c0e9ab87bd1994db16b773fd120f1"} Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.447383 4710 generic.go:334] "Generic (PLEG): container finished" podID="6fbcd726-3ba8-41eb-9b6c-9648483ec935" containerID="8d287fdf7084b5053211e99284fb41486021767cc919d07728022a601962681b" exitCode=0 Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.447437 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq7q7" event={"ID":"6fbcd726-3ba8-41eb-9b6c-9648483ec935","Type":"ContainerDied","Data":"8d287fdf7084b5053211e99284fb41486021767cc919d07728022a601962681b"} Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.453126 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a34aa75-5b89-4d7f-a45e-068f8f03c254-config\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.453160 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a058abcd-b685-4e58-8084-a33210e7b833-proxy-ca-bundles\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.453190 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmwxp\" (UniqueName: \"kubernetes.io/projected/a058abcd-b685-4e58-8084-a33210e7b833-kube-api-access-bmwxp\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.453214 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a34aa75-5b89-4d7f-a45e-068f8f03c254-client-ca\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.453232 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkt4z\" (UniqueName: \"kubernetes.io/projected/7a34aa75-5b89-4d7f-a45e-068f8f03c254-kube-api-access-kkt4z\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.453249 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a058abcd-b685-4e58-8084-a33210e7b833-serving-cert\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.453297 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a058abcd-b685-4e58-8084-a33210e7b833-config\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.453335 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a34aa75-5b89-4d7f-a45e-068f8f03c254-serving-cert\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.453351 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a058abcd-b685-4e58-8084-a33210e7b833-client-ca\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.454356 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a058abcd-b685-4e58-8084-a33210e7b833-proxy-ca-bundles\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.454406 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a34aa75-5b89-4d7f-a45e-068f8f03c254-client-ca\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.454585 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a058abcd-b685-4e58-8084-a33210e7b833-client-ca\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.454639 4710 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a058abcd-b685-4e58-8084-a33210e7b833-config\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.454683 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a34aa75-5b89-4d7f-a45e-068f8f03c254-config\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.457976 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a058abcd-b685-4e58-8084-a33210e7b833-serving-cert\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.459578 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a34aa75-5b89-4d7f-a45e-068f8f03c254-serving-cert\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.467897 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmwxp\" (UniqueName: \"kubernetes.io/projected/a058abcd-b685-4e58-8084-a33210e7b833-kube-api-access-bmwxp\") pod \"controller-manager-64798f646d-fsv2m\" (UID: \"a058abcd-b685-4e58-8084-a33210e7b833\") " pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.469207 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkt4z\" (UniqueName: \"kubernetes.io/projected/7a34aa75-5b89-4d7f-a45e-068f8f03c254-kube-api-access-kkt4z\") pod \"route-controller-manager-7b9699fbd9-mrf6q\" (UID: \"7a34aa75-5b89-4d7f-a45e-068f8f03c254\") " pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.558687 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.586931 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.818521 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gj4kl"] Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.819924 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.838826 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gj4kl"] Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.845520 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q"] Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.959105 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r94n\" (UniqueName: \"kubernetes.io/projected/0b982e59-f24b-48b7-b0cf-cd196c35c646-kube-api-access-9r94n\") pod \"redhat-marketplace-gj4kl\" (UID: \"0b982e59-f24b-48b7-b0cf-cd196c35c646\") " pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.959146 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b982e59-f24b-48b7-b0cf-cd196c35c646-utilities\") pod \"redhat-marketplace-gj4kl\" (UID: \"0b982e59-f24b-48b7-b0cf-cd196c35c646\") " pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:12 crc kubenswrapper[4710]: I1128 17:04:12.959178 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b982e59-f24b-48b7-b0cf-cd196c35c646-catalog-content\") pod \"redhat-marketplace-gj4kl\" (UID: \"0b982e59-f24b-48b7-b0cf-cd196c35c646\") " pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.060284 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r94n\" (UniqueName: \"kubernetes.io/projected/0b982e59-f24b-48b7-b0cf-cd196c35c646-kube-api-access-9r94n\") pod \"redhat-marketplace-gj4kl\" (UID: \"0b982e59-f24b-48b7-b0cf-cd196c35c646\") " pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.060340 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b982e59-f24b-48b7-b0cf-cd196c35c646-utilities\") pod \"redhat-marketplace-gj4kl\" (UID: \"0b982e59-f24b-48b7-b0cf-cd196c35c646\") " pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.060381 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b982e59-f24b-48b7-b0cf-cd196c35c646-catalog-content\") pod \"redhat-marketplace-gj4kl\" (UID: \"0b982e59-f24b-48b7-b0cf-cd196c35c646\") " pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.060925 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b982e59-f24b-48b7-b0cf-cd196c35c646-catalog-content\") pod \"redhat-marketplace-gj4kl\" (UID: \"0b982e59-f24b-48b7-b0cf-cd196c35c646\") " pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.061460 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b982e59-f24b-48b7-b0cf-cd196c35c646-utilities\") pod \"redhat-marketplace-gj4kl\" (UID: 
\"0b982e59-f24b-48b7-b0cf-cd196c35c646\") " pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.100169 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r94n\" (UniqueName: \"kubernetes.io/projected/0b982e59-f24b-48b7-b0cf-cd196c35c646-kube-api-access-9r94n\") pod \"redhat-marketplace-gj4kl\" (UID: \"0b982e59-f24b-48b7-b0cf-cd196c35c646\") " pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.147744 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ea3c254-1948-428e-a9af-4390bb516cea" path="/var/lib/kubelet/pods/0ea3c254-1948-428e-a9af-4390bb516cea/volumes" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.148945 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7d25392-1fae-413d-ad03-20f53f1ac112" path="/var/lib/kubelet/pods/b7d25392-1fae-413d-ad03-20f53f1ac112/volumes" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.169362 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64798f646d-fsv2m"] Nov 28 17:04:13 crc kubenswrapper[4710]: W1128 17:04:13.174064 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda058abcd_b685_4e58_8084_a33210e7b833.slice/crio-3a6a769d2d31b75eed989f1234b65e22ec07d6fbd339fc0322ea09abaa4f6820 WatchSource:0}: Error finding container 3a6a769d2d31b75eed989f1234b65e22ec07d6fbd339fc0322ea09abaa4f6820: Status 404 returned error can't find the container with id 3a6a769d2d31b75eed989f1234b65e22ec07d6fbd339fc0322ea09abaa4f6820 Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.194711 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.335917 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.336887 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.386827 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.454620 4710 generic.go:334] "Generic (PLEG): container finished" podID="4de54a7a-65ab-4560-a62e-0fb531a0ca92" containerID="6a6f3ec7b2f81d2a1702f5ef6eb01d9edc742aa9bf105c80b1f97cf894e4aa4d" exitCode=0 Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.454678 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npsxw" event={"ID":"4de54a7a-65ab-4560-a62e-0fb531a0ca92","Type":"ContainerDied","Data":"6a6f3ec7b2f81d2a1702f5ef6eb01d9edc742aa9bf105c80b1f97cf894e4aa4d"} Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.456550 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" event={"ID":"a058abcd-b685-4e58-8084-a33210e7b833","Type":"ContainerStarted","Data":"3a6a769d2d31b75eed989f1234b65e22ec07d6fbd339fc0322ea09abaa4f6820"} Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.458333 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" event={"ID":"7a34aa75-5b89-4d7f-a45e-068f8f03c254","Type":"ContainerStarted","Data":"efb9f3766af173c6f79a90a334efa161b219724d3f599ecd0a05a005e7d6f4e4"} Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.458367 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" event={"ID":"7a34aa75-5b89-4d7f-a45e-068f8f03c254","Type":"ContainerStarted","Data":"be8ca9e879c5be08cc99b1fb55918a2aac93f9558fd84e50aa6c5c687ed54fb7"} Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.458573 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.462278 4710 generic.go:334] "Generic (PLEG): container finished" podID="41e6bdd6-6ee2-4793-b202-d0297c3843f1" containerID="876e4fb35c08a5f8c08247007efb1e4bd72135a49eba349c9aa6633098b84ef3" exitCode=0 Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.462351 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdmpr" event={"ID":"41e6bdd6-6ee2-4793-b202-d0297c3843f1","Type":"ContainerDied","Data":"876e4fb35c08a5f8c08247007efb1e4bd72135a49eba349c9aa6633098b84ef3"} Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.462611 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.465236 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-khfkl" 
event={"ID":"95fd8509-97bd-4d02-87d5-3593b426ef44","Type":"ContainerStarted","Data":"1afe41a98125471a631d2793a23f79fe29d7e56087bc78b881bb9ea7a2aede5f"} Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.550293 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vdv6c" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.551498 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b9699fbd9-mrf6q" podStartSLOduration=3.551478311 podStartE2EDuration="3.551478311s" podCreationTimestamp="2025-11-28 17:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:04:13.511035382 +0000 UTC m=+342.769335427" watchObservedRunningTime="2025-11-28 17:04:13.551478311 +0000 UTC m=+342.809778356" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.574239 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-khfkl" podStartSLOduration=3.218664388 podStartE2EDuration="6.574224209s" podCreationTimestamp="2025-11-28 17:04:07 +0000 UTC" firstStartedPulling="2025-11-28 17:04:09.353595797 +0000 UTC m=+338.611895842" lastFinishedPulling="2025-11-28 17:04:12.709155618 +0000 UTC m=+341.967455663" observedRunningTime="2025-11-28 17:04:13.550042316 +0000 UTC m=+342.808342361" watchObservedRunningTime="2025-11-28 17:04:13.574224209 +0000 UTC m=+342.832524254" Nov 28 17:04:13 crc kubenswrapper[4710]: I1128 17:04:13.947975 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gj4kl"] Nov 28 17:04:13 crc kubenswrapper[4710]: W1128 17:04:13.959323 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b982e59_f24b_48b7_b0cf_cd196c35c646.slice/crio-a42431cf32ab31c5f21adc8cbe36068fc464a0aefc5db6fc7982444da6cc6513 WatchSource:0}: Error finding container a42431cf32ab31c5f21adc8cbe36068fc464a0aefc5db6fc7982444da6cc6513: Status 404 returned error can't find the container with id a42431cf32ab31c5f21adc8cbe36068fc464a0aefc5db6fc7982444da6cc6513 Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.010087 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pllxk"] Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.011413 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.022672 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pllxk"] Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.076518 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea56f67-6506-451e-965f-3ef66a34d8e7-catalog-content\") pod \"redhat-marketplace-pllxk\" (UID: \"dea56f67-6506-451e-965f-3ef66a34d8e7\") " pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.076942 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea56f67-6506-451e-965f-3ef66a34d8e7-utilities\") pod \"redhat-marketplace-pllxk\" (UID: \"dea56f67-6506-451e-965f-3ef66a34d8e7\") " pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.076983 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js82v\" (UniqueName: \"kubernetes.io/projected/dea56f67-6506-451e-965f-3ef66a34d8e7-kube-api-access-js82v\") pod \"redhat-marketplace-pllxk\" (UID: \"dea56f67-6506-451e-965f-3ef66a34d8e7\") " pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.178079 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea56f67-6506-451e-965f-3ef66a34d8e7-catalog-content\") pod \"redhat-marketplace-pllxk\" (UID: \"dea56f67-6506-451e-965f-3ef66a34d8e7\") " pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.178224 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea56f67-6506-451e-965f-3ef66a34d8e7-utilities\") pod \"redhat-marketplace-pllxk\" (UID: \"dea56f67-6506-451e-965f-3ef66a34d8e7\") " pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.178253 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js82v\" (UniqueName: \"kubernetes.io/projected/dea56f67-6506-451e-965f-3ef66a34d8e7-kube-api-access-js82v\") pod \"redhat-marketplace-pllxk\" (UID: \"dea56f67-6506-451e-965f-3ef66a34d8e7\") " pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.178619 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea56f67-6506-451e-965f-3ef66a34d8e7-catalog-content\") pod \"redhat-marketplace-pllxk\" (UID: \"dea56f67-6506-451e-965f-3ef66a34d8e7\") " pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.178779 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea56f67-6506-451e-965f-3ef66a34d8e7-utilities\") pod \"redhat-marketplace-pllxk\" (UID: \"dea56f67-6506-451e-965f-3ef66a34d8e7\") " pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.200875 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-js82v\" (UniqueName: \"kubernetes.io/projected/dea56f67-6506-451e-965f-3ef66a34d8e7-kube-api-access-js82v\") pod \"redhat-marketplace-pllxk\" (UID: \"dea56f67-6506-451e-965f-3ef66a34d8e7\") " pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.332015 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.472165 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npsxw" event={"ID":"4de54a7a-65ab-4560-a62e-0fb531a0ca92","Type":"ContainerStarted","Data":"e462ca5bda52e8a72835144f04b0d7ca10f10f1264e49b7b39794116cfc16cbc"} Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.490548 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dq7q7" event={"ID":"6fbcd726-3ba8-41eb-9b6c-9648483ec935","Type":"ContainerStarted","Data":"4f20760395c8af8b2f3a451451d1f553ee8817c1a1259045196e11b1f770f2a7"} Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.505384 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" event={"ID":"a058abcd-b685-4e58-8084-a33210e7b833","Type":"ContainerStarted","Data":"b49b5d2438774bc1efa833b602cddba85ab84175720f976d67b0d421bc70c134"} Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.506324 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.514484 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npsxw" podStartSLOduration=1.79022829 podStartE2EDuration="4.51445812s" podCreationTimestamp="2025-11-28 17:04:10 +0000 UTC" firstStartedPulling="2025-11-28 17:04:11.40488691 +0000 UTC m=+340.663186955" lastFinishedPulling="2025-11-28 17:04:14.12911674 +0000 UTC m=+343.387416785" observedRunningTime="2025-11-28 17:04:14.507503814 +0000 UTC m=+343.765803879" watchObservedRunningTime="2025-11-28 17:04:14.51445812 +0000 UTC m=+343.772758165" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.521504 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.526673 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdmpr" event={"ID":"41e6bdd6-6ee2-4793-b202-d0297c3843f1","Type":"ContainerStarted","Data":"e5a12a592401ef6e011f49750348bd74379f4a5b43ece86f48c3f8eec9d25189"} Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.535489 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.535518 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.541015 4710 generic.go:334] "Generic (PLEG): container finished" podID="0b982e59-f24b-48b7-b0cf-cd196c35c646" containerID="cabaac484443a0eb58d6f1ebc3eff2dd4e68d65c0b2c68435822bcb91ab9ad3e" exitCode=0 Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.541919 4710 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj4kl" event={"ID":"0b982e59-f24b-48b7-b0cf-cd196c35c646","Type":"ContainerDied","Data":"cabaac484443a0eb58d6f1ebc3eff2dd4e68d65c0b2c68435822bcb91ab9ad3e"} Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.541943 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj4kl" event={"ID":"0b982e59-f24b-48b7-b0cf-cd196c35c646","Type":"ContainerStarted","Data":"a42431cf32ab31c5f21adc8cbe36068fc464a0aefc5db6fc7982444da6cc6513"} Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.555060 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64798f646d-fsv2m" podStartSLOduration=4.555041275 podStartE2EDuration="4.555041275s" podCreationTimestamp="2025-11-28 17:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:04:14.53529167 +0000 UTC m=+343.793591715" watchObservedRunningTime="2025-11-28 17:04:14.555041275 +0000 UTC m=+343.813341320" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.557114 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dq7q7" podStartSLOduration=2.5070852219999997 podStartE2EDuration="5.557106589s" podCreationTimestamp="2025-11-28 17:04:09 +0000 UTC" firstStartedPulling="2025-11-28 17:04:10.387239527 +0000 UTC m=+339.645539572" lastFinishedPulling="2025-11-28 17:04:13.437260894 +0000 UTC m=+342.695560939" observedRunningTime="2025-11-28 17:04:14.554537049 +0000 UTC m=+343.812837114" watchObservedRunningTime="2025-11-28 17:04:14.557106589 +0000 UTC m=+343.815406624" Nov 28 17:04:14 crc kubenswrapper[4710]: I1128 17:04:14.629464 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:14.922942 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pllxk"] Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.205746 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-d9z7q"] Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.206856 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.217804 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d9z7q"] Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.299286 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeef35e5-e5cb-4fb3-af00-f5adca01d8e6-catalog-content\") pod \"redhat-marketplace-d9z7q\" (UID: \"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6\") " pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.299338 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmsfk\" (UniqueName: \"kubernetes.io/projected/aeef35e5-e5cb-4fb3-af00-f5adca01d8e6-kube-api-access-mmsfk\") pod \"redhat-marketplace-d9z7q\" (UID: \"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6\") " pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.299375 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeef35e5-e5cb-4fb3-af00-f5adca01d8e6-utilities\") pod \"redhat-marketplace-d9z7q\" (UID: \"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6\") " pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.400431 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeef35e5-e5cb-4fb3-af00-f5adca01d8e6-catalog-content\") pod \"redhat-marketplace-d9z7q\" (UID: \"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6\") " pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.400473 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmsfk\" (UniqueName: \"kubernetes.io/projected/aeef35e5-e5cb-4fb3-af00-f5adca01d8e6-kube-api-access-mmsfk\") pod \"redhat-marketplace-d9z7q\" (UID: \"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6\") " pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.400502 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeef35e5-e5cb-4fb3-af00-f5adca01d8e6-utilities\") pod \"redhat-marketplace-d9z7q\" (UID: \"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6\") " pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.401314 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aeef35e5-e5cb-4fb3-af00-f5adca01d8e6-utilities\") pod \"redhat-marketplace-d9z7q\" (UID: \"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6\") " pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.401332 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aeef35e5-e5cb-4fb3-af00-f5adca01d8e6-catalog-content\") pod \"redhat-marketplace-d9z7q\" (UID: \"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6\") " pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.421387 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-mmsfk\" (UniqueName: \"kubernetes.io/projected/aeef35e5-e5cb-4fb3-af00-f5adca01d8e6-kube-api-access-mmsfk\") pod \"redhat-marketplace-d9z7q\" (UID: \"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6\") " pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.521935 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.556554 4710 generic.go:334] "Generic (PLEG): container finished" podID="41e6bdd6-6ee2-4793-b202-d0297c3843f1" containerID="e5a12a592401ef6e011f49750348bd74379f4a5b43ece86f48c3f8eec9d25189" exitCode=0 Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.556878 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdmpr" event={"ID":"41e6bdd6-6ee2-4793-b202-d0297c3843f1","Type":"ContainerDied","Data":"e5a12a592401ef6e011f49750348bd74379f4a5b43ece86f48c3f8eec9d25189"} Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.558927 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pllxk" event={"ID":"dea56f67-6506-451e-965f-3ef66a34d8e7","Type":"ContainerStarted","Data":"0b894c462dc1c2c44898773cfefd85c86e8a21c79da0271ff9695ee543134e9a"} Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.630965 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tdbpv" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.979694 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:15.980132 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:16.025389 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:16.566624 4710 generic.go:334] "Generic (PLEG): container finished" podID="dea56f67-6506-451e-965f-3ef66a34d8e7" containerID="4200a337ecb782b0f02d3c2633fc095f9cdf8e82d4bf5297dca6fe821f2eeae4" exitCode=0 Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:16.566737 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pllxk" event={"ID":"dea56f67-6506-451e-965f-3ef66a34d8e7","Type":"ContainerDied","Data":"4200a337ecb782b0f02d3c2633fc095f9cdf8e82d4bf5297dca6fe821f2eeae4"} Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:16.576868 4710 generic.go:334] "Generic (PLEG): container finished" podID="0b982e59-f24b-48b7-b0cf-cd196c35c646" containerID="3a1b6c409a9813fd07cec0a99e121e195816cd928e25440e6aba26b7409b649a" exitCode=0 Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:16.577056 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj4kl" event={"ID":"0b982e59-f24b-48b7-b0cf-cd196c35c646","Type":"ContainerDied","Data":"3a1b6c409a9813fd07cec0a99e121e195816cd928e25440e6aba26b7409b649a"} Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:16.619787 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m5srj" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:17.122819 4710 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:17.122936 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:17.179849 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:17.637817 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5l7l6" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:18.347126 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:18.347569 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:18.395737 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:18 crc kubenswrapper[4710]: I1128 17:04:18.633138 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-khfkl" Nov 28 17:04:19 crc kubenswrapper[4710]: I1128 17:04:19.225092 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-d9z7q"] Nov 28 17:04:19 crc kubenswrapper[4710]: I1128 17:04:19.599720 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9z7q" event={"ID":"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6","Type":"ContainerStarted","Data":"19a5b14301a5e4655ebf463d71be4fc21fb099f24c700963d3a2304da5406fce"} Nov 28 17:04:19 crc kubenswrapper[4710]: I1128 17:04:19.602607 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wdmpr" event={"ID":"41e6bdd6-6ee2-4793-b202-d0297c3843f1","Type":"ContainerStarted","Data":"6ab23536d1d3caba7722064d112f0a0367d14b9f58724334de2095a85bcd99b2"} Nov 28 17:04:19 crc kubenswrapper[4710]: I1128 17:04:19.605363 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gj4kl" event={"ID":"0b982e59-f24b-48b7-b0cf-cd196c35c646","Type":"ContainerStarted","Data":"a95ca74f9212e689ca4fda087410f241e2199fac284c8c3f151933871da9aaaa"} Nov 28 17:04:19 crc kubenswrapper[4710]: I1128 17:04:19.618919 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:19 crc kubenswrapper[4710]: I1128 17:04:19.618962 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dq7q7" Nov 28 17:04:19 crc kubenswrapper[4710]: I1128 17:04:19.621923 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wdmpr" podStartSLOduration=3.205397875 podStartE2EDuration="8.621903751s" podCreationTimestamp="2025-11-28 17:04:11 +0000 UTC" firstStartedPulling="2025-11-28 17:04:13.469310931 +0000 UTC m=+342.727610976" lastFinishedPulling="2025-11-28 17:04:18.885816807 +0000 UTC m=+348.144116852" observedRunningTime="2025-11-28 17:04:19.619150675 +0000 UTC m=+348.877450720" watchObservedRunningTime="2025-11-28 
Nov 28 17:04:19 crc kubenswrapper[4710]: I1128 17:04:19.621923 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wdmpr" podStartSLOduration=3.205397875 podStartE2EDuration="8.621903751s" podCreationTimestamp="2025-11-28 17:04:11 +0000 UTC" firstStartedPulling="2025-11-28 17:04:13.469310931 +0000 UTC m=+342.727610976" lastFinishedPulling="2025-11-28 17:04:18.885816807 +0000 UTC m=+348.144116852" observedRunningTime="2025-11-28 17:04:19.619150675 +0000 UTC m=+348.877450720" watchObservedRunningTime="2025-11-28 17:04:19.621903751 +0000 UTC m=+348.880203796"
Nov 28 17:04:19 crc kubenswrapper[4710]: I1128 17:04:19.661858 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dq7q7"
Nov 28 17:04:20 crc kubenswrapper[4710]: I1128 17:04:20.610908 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9z7q" event={"ID":"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6","Type":"ContainerStarted","Data":"574b7c3e3d740359464f10bad892124f062f0bb6dc63eca88eb69000a30f1721"}
Nov 28 17:04:20 crc kubenswrapper[4710]: I1128 17:04:20.641065 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gj4kl" podStartSLOduration=4.082638349 podStartE2EDuration="8.64105013s" podCreationTimestamp="2025-11-28 17:04:12 +0000 UTC" firstStartedPulling="2025-11-28 17:04:14.546896251 +0000 UTC m=+343.805196286" lastFinishedPulling="2025-11-28 17:04:19.105308022 +0000 UTC m=+348.363608067" observedRunningTime="2025-11-28 17:04:20.636968722 +0000 UTC m=+349.895268767" watchObservedRunningTime="2025-11-28 17:04:20.64105013 +0000 UTC m=+349.899350175"
Nov 28 17:04:20 crc kubenswrapper[4710]: I1128 17:04:20.660257 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npsxw"
Nov 28 17:04:20 crc kubenswrapper[4710]: I1128 17:04:20.660311 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npsxw"
Nov 28 17:04:20 crc kubenswrapper[4710]: I1128 17:04:20.670924 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dq7q7"
Nov 28 17:04:20 crc kubenswrapper[4710]: I1128 17:04:20.706801 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-npsxw"
Nov 28 17:04:21 crc kubenswrapper[4710]: I1128 17:04:21.619398 4710 generic.go:334] "Generic (PLEG): container finished" podID="dea56f67-6506-451e-965f-3ef66a34d8e7" containerID="41718e6da2d6c5ef99bb20fabd015bc8fd27d713a6396b300e63fb0f3e5e8ba3" exitCode=0
Nov 28 17:04:21 crc kubenswrapper[4710]: I1128 17:04:21.619695 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pllxk" event={"ID":"dea56f67-6506-451e-965f-3ef66a34d8e7","Type":"ContainerDied","Data":"41718e6da2d6c5ef99bb20fabd015bc8fd27d713a6396b300e63fb0f3e5e8ba3"}
Nov 28 17:04:21 crc kubenswrapper[4710]: I1128 17:04:21.624357 4710 generic.go:334] "Generic (PLEG): container finished" podID="aeef35e5-e5cb-4fb3-af00-f5adca01d8e6" containerID="574b7c3e3d740359464f10bad892124f062f0bb6dc63eca88eb69000a30f1721" exitCode=0
Nov 28 17:04:21 crc kubenswrapper[4710]: I1128 17:04:21.624398 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9z7q" event={"ID":"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6","Type":"ContainerDied","Data":"574b7c3e3d740359464f10bad892124f062f0bb6dc63eca88eb69000a30f1721"}
Nov 28 17:04:21 crc kubenswrapper[4710]: I1128 17:04:21.680327 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npsxw"
Nov 28 17:04:21 crc kubenswrapper[4710]: I1128 17:04:21.929081 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wdmpr"
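The pod_startup_latency_tracker lines encode one relationship worth spelling out: podStartSLOduration is podStartE2EDuration minus the image-pull window (lastFinishedPulling − firstStartedPulling), i.e. the startup SLI excludes time spent pulling images. Checking this against the redhat-marketplace-wdmpr record above using its monotonic m=+ offsets:

    pull window  = 348.144116852 − 342.727610976 = 5.416505876 s
    SLO duration = 8.621903751 − 5.416505876     = 3.205397875 s   (matches podStartSLOduration)

The redhat-marketplace-gj4kl record works out the same way: 8.64105013 − (348.363608067 − 343.805196286) = 4.082638349.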
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:21 crc kubenswrapper[4710]: I1128 17:04:21.969644 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:23 crc kubenswrapper[4710]: I1128 17:04:23.194865 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:23 crc kubenswrapper[4710]: I1128 17:04:23.195435 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:23 crc kubenswrapper[4710]: I1128 17:04:23.246094 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:23 crc kubenswrapper[4710]: I1128 17:04:23.635738 4710 generic.go:334] "Generic (PLEG): container finished" podID="aeef35e5-e5cb-4fb3-af00-f5adca01d8e6" containerID="f617096a81966588f950af6b1921ae72da55d76b9c87276fd902e4d1e31904cd" exitCode=0 Nov 28 17:04:23 crc kubenswrapper[4710]: I1128 17:04:23.636152 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9z7q" event={"ID":"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6","Type":"ContainerDied","Data":"f617096a81966588f950af6b1921ae72da55d76b9c87276fd902e4d1e31904cd"} Nov 28 17:04:24 crc kubenswrapper[4710]: I1128 17:04:24.645263 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-d9z7q" event={"ID":"aeef35e5-e5cb-4fb3-af00-f5adca01d8e6","Type":"ContainerStarted","Data":"31df25ef77968f336307cddb44dd6502c138238f293073d10f09115aef3a6bbf"} Nov 28 17:04:24 crc kubenswrapper[4710]: I1128 17:04:24.648407 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pllxk" event={"ID":"dea56f67-6506-451e-965f-3ef66a34d8e7","Type":"ContainerStarted","Data":"0c91aac11a39350e710f42f4edaca1c5dde06c273dee95c7a7aa0fb39cc5d277"} Nov 28 17:04:24 crc kubenswrapper[4710]: I1128 17:04:24.676739 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-d9z7q" podStartSLOduration=6.005089106 podStartE2EDuration="9.676719671s" podCreationTimestamp="2025-11-28 17:04:15 +0000 UTC" firstStartedPulling="2025-11-28 17:04:20.619971393 +0000 UTC m=+349.878271438" lastFinishedPulling="2025-11-28 17:04:24.291601948 +0000 UTC m=+353.549902003" observedRunningTime="2025-11-28 17:04:24.675486623 +0000 UTC m=+353.933786658" watchObservedRunningTime="2025-11-28 17:04:24.676719671 +0000 UTC m=+353.935019716" Nov 28 17:04:24 crc kubenswrapper[4710]: I1128 17:04:24.692680 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pllxk" podStartSLOduration=4.580968039 podStartE2EDuration="11.692662977s" podCreationTimestamp="2025-11-28 17:04:13 +0000 UTC" firstStartedPulling="2025-11-28 17:04:16.568388675 +0000 UTC m=+345.826688720" lastFinishedPulling="2025-11-28 17:04:23.680083613 +0000 UTC m=+352.938383658" observedRunningTime="2025-11-28 17:04:24.69176785 +0000 UTC m=+353.950067885" watchObservedRunningTime="2025-11-28 17:04:24.692662977 +0000 UTC m=+353.950963022" Nov 28 17:04:25 crc kubenswrapper[4710]: I1128 17:04:25.523027 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 
17:04:25 crc kubenswrapper[4710]: I1128 17:04:25.523373 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:26 crc kubenswrapper[4710]: I1128 17:04:26.566479 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-d9z7q" podUID="aeef35e5-e5cb-4fb3-af00-f5adca01d8e6" containerName="registry-server" probeResult="failure" output=< Nov 28 17:04:26 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s Nov 28 17:04:26 crc kubenswrapper[4710]: > Nov 28 17:04:31 crc kubenswrapper[4710]: I1128 17:04:31.975620 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wdmpr" Nov 28 17:04:33 crc kubenswrapper[4710]: I1128 17:04:33.246355 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gj4kl" Nov 28 17:04:34 crc kubenswrapper[4710]: I1128 17:04:34.333331 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:34 crc kubenswrapper[4710]: I1128 17:04:34.333663 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:34 crc kubenswrapper[4710]: I1128 17:04:34.388177 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:34 crc kubenswrapper[4710]: I1128 17:04:34.770494 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pllxk" Nov 28 17:04:35 crc kubenswrapper[4710]: I1128 17:04:35.559738 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:35 crc kubenswrapper[4710]: I1128 17:04:35.605544 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-d9z7q" Nov 28 17:04:43 crc kubenswrapper[4710]: I1128 17:04:43.344498 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:04:43 crc kubenswrapper[4710]: I1128 17:04:43.345121 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:05:02 crc kubenswrapper[4710]: I1128 17:05:02.156078 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:05:02 crc kubenswrapper[4710]: I1128 17:05:02.156930 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
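The startup-probe failure above ("timeout: failed to connect service \":50051\" within 1s") is the wording a grpc-health-probe-style checker prints when the registry-server's gRPC endpoint is not yet accepting connections; one failed poll is expected while the catalog image initializes, and the probe flips to "started" nine seconds later. A minimal sketch of the equivalent check in Go, assuming the standard google.golang.org/grpc health/v1 API; the address and the 1s timeout come from the log line, everything else is illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
    	defer cancel()
    	// Block until connected or the 1s deadline expires, mirroring the probe.
    	conn, err := grpc.DialContext(ctx, ":50051",
    		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
    	if err != nil {
    		fmt.Printf("timeout: failed to connect service %q within 1s\n", ":50051")
    		os.Exit(1)
    	}
    	defer conn.Close()
    	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
    	if err != nil || resp.Status != healthpb.HealthCheckResponse_SERVING {
    		os.Exit(1) // non-zero exit marks the probe attempt unhealthy
    	}
    }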
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:05:02 crc kubenswrapper[4710]: I1128 17:05:02.157366 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:05:02 crc kubenswrapper[4710]: I1128 17:05:02.164993 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:05:02 crc kubenswrapper[4710]: I1128 17:05:02.455728 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 28 17:05:02 crc kubenswrapper[4710]: W1128 17:05:02.967253 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-7a7df8abdfb75fdfd7f6e4eb7bc5ccb9b689f633eac4af1a717db1ef38074cdc WatchSource:0}: Error finding container 7a7df8abdfb75fdfd7f6e4eb7bc5ccb9b689f633eac4af1a717db1ef38074cdc: Status 404 returned error can't find the container with id 7a7df8abdfb75fdfd7f6e4eb7bc5ccb9b689f633eac4af1a717db1ef38074cdc Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.168811 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.168965 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.174416 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.174505 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.243981 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.442810 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:05:03 crc kubenswrapper[4710]: W1128 17:05:03.461247 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-bb812304c94c10822ddb6fc7fec3b64eea4bab4b41a88682f72b4a2e2aee6ddf WatchSource:0}: Error finding container bb812304c94c10822ddb6fc7fec3b64eea4bab4b41a88682f72b4a2e2aee6ddf: Status 404 returned error can't find the container with id bb812304c94c10822ddb6fc7fec3b64eea4bab4b41a88682f72b4a2e2aee6ddf Nov 28 17:05:03 crc kubenswrapper[4710]: W1128 17:05:03.879007 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-e16aae8751f2100da9f5899a6a75cca449d9d648fae9c2902d2a511a72ee7718 WatchSource:0}: Error finding container e16aae8751f2100da9f5899a6a75cca449d9d648fae9c2902d2a511a72ee7718: Status 404 returned error can't find the container with id e16aae8751f2100da9f5899a6a75cca449d9d648fae9c2902d2a511a72ee7718 Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.946744 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e16aae8751f2100da9f5899a6a75cca449d9d648fae9c2902d2a511a72ee7718"} Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.948707 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"808879c7a4b0124698bf390903c5c4f9b753354bfd912daa5779eb8c36367335"} Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.948778 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bb812304c94c10822ddb6fc7fec3b64eea4bab4b41a88682f72b4a2e2aee6ddf"} Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.953782 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0daf66d0c0cfbb886ec6a847787adc6979d939ecefba0bc54ba937621b442c29"} Nov 28 17:05:03 crc kubenswrapper[4710]: I1128 17:05:03.953841 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7a7df8abdfb75fdfd7f6e4eb7bc5ccb9b689f633eac4af1a717db1ef38074cdc"} Nov 28 17:05:04 crc kubenswrapper[4710]: I1128 17:05:04.961607 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4a00a586e1072348c5d752018ccdbabe16e8fec6e81b4250d6e0ea4073180cf3"} Nov 28 17:05:04 crc kubenswrapper[4710]: I1128 17:05:04.962019 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:05:13 crc kubenswrapper[4710]: I1128 17:05:13.343787 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:05:13 crc kubenswrapper[4710]: I1128 17:05:13.344326 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:05:43 crc kubenswrapper[4710]: I1128 17:05:43.344516 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:05:43 crc kubenswrapper[4710]: I1128 17:05:43.345035 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:05:43 crc kubenswrapper[4710]: I1128 17:05:43.345077 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:05:43 crc kubenswrapper[4710]: I1128 17:05:43.345534 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"456a00d5cd0fbfc13a479799f023f2982c20805bb4d32bd660ed7b512390b959"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:05:43 crc kubenswrapper[4710]: I1128 17:05:43.345583 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://456a00d5cd0fbfc13a479799f023f2982c20805bb4d32bd660ed7b512390b959" gracePeriod=600 Nov 28 17:05:43 crc kubenswrapper[4710]: I1128 17:05:43.446367 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 28 17:05:44 crc kubenswrapper[4710]: I1128 17:05:44.237574 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="456a00d5cd0fbfc13a479799f023f2982c20805bb4d32bd660ed7b512390b959" exitCode=0 Nov 28 17:05:44 crc kubenswrapper[4710]: I1128 17:05:44.238231 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" 
event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"456a00d5cd0fbfc13a479799f023f2982c20805bb4d32bd660ed7b512390b959"} Nov 28 17:05:44 crc kubenswrapper[4710]: I1128 17:05:44.238562 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"503a90972a7301443a4a3341e128be8edb746f7d27a04b1ad0ecedf9ae666272"} Nov 28 17:05:44 crc kubenswrapper[4710]: I1128 17:05:44.238669 4710 scope.go:117] "RemoveContainer" containerID="eb9c522d827df20dc90c8e139d2f487367f317d525130206bd326ced1362083e" Nov 28 17:07:43 crc kubenswrapper[4710]: I1128 17:07:43.344214 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:07:43 crc kubenswrapper[4710]: I1128 17:07:43.344813 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:08:13 crc kubenswrapper[4710]: I1128 17:08:13.344400 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:08:13 crc kubenswrapper[4710]: I1128 17:08:13.345796 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:08:31 crc kubenswrapper[4710]: I1128 17:08:31.987609 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-cbsnk"] Nov 28 17:08:31 crc kubenswrapper[4710]: I1128 17:08:31.989103 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-cbsnk" Nov 28 17:08:31 crc kubenswrapper[4710]: I1128 17:08:31.990808 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 28 17:08:31 crc kubenswrapper[4710]: I1128 17:08:31.991753 4710 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-8dkzh" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.002237 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ns9h5"] Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.003003 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ns9h5" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.004308 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.007404 4710 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-cgw5k" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.008225 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-cbsnk"] Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.021019 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ns9h5"] Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.026973 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-kkqp9"] Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.027871 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.030093 4710 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-tmkbd" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.043658 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-kkqp9"] Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.126893 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpsmw\" (UniqueName: \"kubernetes.io/projected/4e9ab145-a3a5-49a1-8c9f-b7ee399dddf9-kube-api-access-qpsmw\") pod \"cert-manager-5b446d88c5-ns9h5\" (UID: \"4e9ab145-a3a5-49a1-8c9f-b7ee399dddf9\") " pod="cert-manager/cert-manager-5b446d88c5-ns9h5" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.126941 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhjjq\" (UniqueName: \"kubernetes.io/projected/2a677c3f-bd3b-4381-893a-e38debf47432-kube-api-access-qhjjq\") pod \"cert-manager-cainjector-7f985d654d-cbsnk\" (UID: \"2a677c3f-bd3b-4381-893a-e38debf47432\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-cbsnk" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.126985 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbphz\" (UniqueName: \"kubernetes.io/projected/67f3d046-0b7e-4f0f-8d7b-b02acc495a44-kube-api-access-lbphz\") pod \"cert-manager-webhook-5655c58dd6-kkqp9\" (UID: \"67f3d046-0b7e-4f0f-8d7b-b02acc495a44\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.228394 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpsmw\" (UniqueName: \"kubernetes.io/projected/4e9ab145-a3a5-49a1-8c9f-b7ee399dddf9-kube-api-access-qpsmw\") pod \"cert-manager-5b446d88c5-ns9h5\" (UID: \"4e9ab145-a3a5-49a1-8c9f-b7ee399dddf9\") " pod="cert-manager/cert-manager-5b446d88c5-ns9h5" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.228449 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhjjq\" (UniqueName: \"kubernetes.io/projected/2a677c3f-bd3b-4381-893a-e38debf47432-kube-api-access-qhjjq\") pod \"cert-manager-cainjector-7f985d654d-cbsnk\" (UID: 
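The volume lines for the three cert-manager pods trace kubelet's volume reconciler in order: VerifyControllerAttachedVolume confirms the volume in the desired state of the world, MountVolume starts, and MountVolume.SetUp succeeded reports the mount done; the kube-api-access-* volumes are projected volumes bundling the service-account token, kube-root-ca.crt, and namespace. A conceptual sketch of the reconcile pattern follows (this is the shape of the loop, not kubelet's actual code); the same loop runs in reverse as UnmountVolume/TearDown at the end of this log:

    package main

    import "fmt"

    type volume struct{ name, pod string }

    // reconcile drives the actual state toward the desired state: mount what is
    // wanted but absent, unmount what is present but no longer wanted.
    func reconcile(desired, actual map[string]volume) {
    	for key, v := range desired {
    		if _, mounted := actual[key]; !mounted {
    			fmt.Printf("MountVolume started for volume %q pod %q\n", v.name, v.pod)
    			actual[key] = v // stands in for MountVolume.SetUp succeeding
    		}
    	}
    	for key, v := range actual {
    		if _, wanted := desired[key]; !wanted {
    			fmt.Printf("UnmountVolume started for volume %q pod %q\n", v.name, v.pod)
    			delete(actual, key) // stands in for UnmountVolume.TearDown succeeding
    		}
    	}
    }

    func main() {
    	desired := map[string]volume{
    		"kube-api-access-qpsmw": {"kube-api-access-qpsmw", "cert-manager-5b446d88c5-ns9h5"},
    	}
    	reconcile(desired, map[string]volume{})
    }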
\"2a677c3f-bd3b-4381-893a-e38debf47432\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-cbsnk" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.228505 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbphz\" (UniqueName: \"kubernetes.io/projected/67f3d046-0b7e-4f0f-8d7b-b02acc495a44-kube-api-access-lbphz\") pod \"cert-manager-webhook-5655c58dd6-kkqp9\" (UID: \"67f3d046-0b7e-4f0f-8d7b-b02acc495a44\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.247233 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpsmw\" (UniqueName: \"kubernetes.io/projected/4e9ab145-a3a5-49a1-8c9f-b7ee399dddf9-kube-api-access-qpsmw\") pod \"cert-manager-5b446d88c5-ns9h5\" (UID: \"4e9ab145-a3a5-49a1-8c9f-b7ee399dddf9\") " pod="cert-manager/cert-manager-5b446d88c5-ns9h5" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.247229 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhjjq\" (UniqueName: \"kubernetes.io/projected/2a677c3f-bd3b-4381-893a-e38debf47432-kube-api-access-qhjjq\") pod \"cert-manager-cainjector-7f985d654d-cbsnk\" (UID: \"2a677c3f-bd3b-4381-893a-e38debf47432\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-cbsnk" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.249723 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbphz\" (UniqueName: \"kubernetes.io/projected/67f3d046-0b7e-4f0f-8d7b-b02acc495a44-kube-api-access-lbphz\") pod \"cert-manager-webhook-5655c58dd6-kkqp9\" (UID: \"67f3d046-0b7e-4f0f-8d7b-b02acc495a44\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.307160 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-cbsnk" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.321603 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-ns9h5" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.348332 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.573878 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-ns9h5"] Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.580949 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.610299 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-kkqp9"] Nov 28 17:08:32 crc kubenswrapper[4710]: W1128 17:08:32.613379 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67f3d046_0b7e_4f0f_8d7b_b02acc495a44.slice/crio-f3bc853206037cccdd8ade7b787825079c76bd33e8f443fb6da4b2690141133d WatchSource:0}: Error finding container f3bc853206037cccdd8ade7b787825079c76bd33e8f443fb6da4b2690141133d: Status 404 returned error can't find the container with id f3bc853206037cccdd8ade7b787825079c76bd33e8f443fb6da4b2690141133d Nov 28 17:08:32 crc kubenswrapper[4710]: I1128 17:08:32.712719 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-cbsnk"] Nov 28 17:08:32 crc kubenswrapper[4710]: W1128 17:08:32.716318 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a677c3f_bd3b_4381_893a_e38debf47432.slice/crio-3ad22f9d2db228fa2f3abf5a080dc90e0f570582d653ca12cfa205a649824fa5 WatchSource:0}: Error finding container 3ad22f9d2db228fa2f3abf5a080dc90e0f570582d653ca12cfa205a649824fa5: Status 404 returned error can't find the container with id 3ad22f9d2db228fa2f3abf5a080dc90e0f570582d653ca12cfa205a649824fa5 Nov 28 17:08:33 crc kubenswrapper[4710]: I1128 17:08:33.362428 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ns9h5" event={"ID":"4e9ab145-a3a5-49a1-8c9f-b7ee399dddf9","Type":"ContainerStarted","Data":"f22fa981cae8778b1af884b885c72c35c2a0addb05f1050726521411d85cb489"} Nov 28 17:08:33 crc kubenswrapper[4710]: I1128 17:08:33.363731 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" event={"ID":"67f3d046-0b7e-4f0f-8d7b-b02acc495a44","Type":"ContainerStarted","Data":"f3bc853206037cccdd8ade7b787825079c76bd33e8f443fb6da4b2690141133d"} Nov 28 17:08:33 crc kubenswrapper[4710]: I1128 17:08:33.364671 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-cbsnk" event={"ID":"2a677c3f-bd3b-4381-893a-e38debf47432","Type":"ContainerStarted","Data":"3ad22f9d2db228fa2f3abf5a080dc90e0f570582d653ca12cfa205a649824fa5"} Nov 28 17:08:37 crc kubenswrapper[4710]: I1128 17:08:37.399974 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-ns9h5" event={"ID":"4e9ab145-a3a5-49a1-8c9f-b7ee399dddf9","Type":"ContainerStarted","Data":"dfa37637a55e7106ba07e7696a47a293e9f1f09e069975c14979da156f9c164a"} Nov 28 17:08:37 crc kubenswrapper[4710]: I1128 17:08:37.401438 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" event={"ID":"67f3d046-0b7e-4f0f-8d7b-b02acc495a44","Type":"ContainerStarted","Data":"5d88753cbe07c5a3293d00734e4d282662cffef002f1728e4fb20472ca5cdf4b"} Nov 28 17:08:37 crc kubenswrapper[4710]: I1128 17:08:37.401527 4710 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" Nov 28 17:08:37 crc kubenswrapper[4710]: I1128 17:08:37.403369 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-cbsnk" event={"ID":"2a677c3f-bd3b-4381-893a-e38debf47432","Type":"ContainerStarted","Data":"3de174f6991849f37ad9948beb2d1e82693e408f4421c4c38f64ec7d4885c5a7"} Nov 28 17:08:37 crc kubenswrapper[4710]: I1128 17:08:37.436882 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-ns9h5" podStartSLOduration=2.55087156 podStartE2EDuration="6.436860566s" podCreationTimestamp="2025-11-28 17:08:31 +0000 UTC" firstStartedPulling="2025-11-28 17:08:32.580639774 +0000 UTC m=+601.838939819" lastFinishedPulling="2025-11-28 17:08:36.46662878 +0000 UTC m=+605.724928825" observedRunningTime="2025-11-28 17:08:37.417905397 +0000 UTC m=+606.676205442" watchObservedRunningTime="2025-11-28 17:08:37.436860566 +0000 UTC m=+606.695160621" Nov 28 17:08:37 crc kubenswrapper[4710]: I1128 17:08:37.438338 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" podStartSLOduration=2.579103944 podStartE2EDuration="6.438328292s" podCreationTimestamp="2025-11-28 17:08:31 +0000 UTC" firstStartedPulling="2025-11-28 17:08:32.615812768 +0000 UTC m=+601.874112813" lastFinishedPulling="2025-11-28 17:08:36.475037116 +0000 UTC m=+605.733337161" observedRunningTime="2025-11-28 17:08:37.43824527 +0000 UTC m=+606.696545315" watchObservedRunningTime="2025-11-28 17:08:37.438328292 +0000 UTC m=+606.696628337" Nov 28 17:08:37 crc kubenswrapper[4710]: I1128 17:08:37.452504 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-cbsnk" podStartSLOduration=2.690937684 podStartE2EDuration="6.45248273s" podCreationTimestamp="2025-11-28 17:08:31 +0000 UTC" firstStartedPulling="2025-11-28 17:08:32.718933652 +0000 UTC m=+601.977233697" lastFinishedPulling="2025-11-28 17:08:36.480478678 +0000 UTC m=+605.738778743" observedRunningTime="2025-11-28 17:08:37.447866185 +0000 UTC m=+606.706166230" watchObservedRunningTime="2025-11-28 17:08:37.45248273 +0000 UTC m=+606.710782785" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.482021 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mzbq9"] Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.483371 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovn-controller" containerID="cri-o://40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85" gracePeriod=30 Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.483419 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="nbdb" containerID="cri-o://51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8" gracePeriod=30 Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.483493 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="northd" 
containerID="cri-o://f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c" gracePeriod=30 Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.483544 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103" gracePeriod=30 Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.483567 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="sbdb" containerID="cri-o://5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164" gracePeriod=30 Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.483593 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kube-rbac-proxy-node" containerID="cri-o://9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259" gracePeriod=30 Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.483635 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovn-acl-logging" containerID="cri-o://6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312" gracePeriod=30 Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.521066 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" containerID="cri-o://ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0" gracePeriod=30 Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.777195 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/3.log" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.779265 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovn-acl-logging/0.log" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.779744 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovn-controller/0.log" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.780187 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831422 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mzjsc"] Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831635 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kube-rbac-proxy-node" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831650 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kube-rbac-proxy-node" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831663 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831670 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831680 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="sbdb" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831685 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="sbdb" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831693 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831699 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831706 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kubecfg-setup" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831711 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kubecfg-setup" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831721 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831727 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831735 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovn-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831741 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovn-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831748 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831756 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831787 4710 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovn-acl-logging" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831795 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovn-acl-logging" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831801 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831806 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831813 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="northd" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831819 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="northd" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.831827 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="nbdb" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831833 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="nbdb" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831940 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovn-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831949 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831960 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831968 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831976 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="sbdb" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831983 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="nbdb" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831989 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kube-rbac-proxy-ovn-metrics" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.831998 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="kube-rbac-proxy-node" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.832004 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="northd" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.832012 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovn-acl-logging" Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.832127 4710 
Nov 28 17:08:41 crc kubenswrapper[4710]: E1128 17:08:41.832127 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller"
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.832135 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller"
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.832228 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller"
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.832239 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerName="ovnkube-controller"
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.833901 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc"
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.874509 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-log-socket\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.874858 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-kubelet\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.874978 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-etc-openvswitch\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.874989 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-log-socket" (OuterVolumeSpecName: "log-socket") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875070 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-node-log\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875107 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875050 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875243 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pzd6\" (UniqueName: \"kubernetes.io/projected/bcf34ad7-9bed-49eb-ad10-20bc5825292a-kube-api-access-6pzd6\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875248 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-node-log" (OuterVolumeSpecName: "node-log") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875316 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-systemd-units\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875365 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-netns\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875392 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-bin\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875427 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875453 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-ovn\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875454 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875476 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-netd\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875466 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875487 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875523 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovn-node-metrics-cert\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875541 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875551 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-config\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875575 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-var-lib-openvswitch\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875602 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-env-overrides\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875623 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875631 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-ovn-kubernetes\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875693 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875706 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-systemd\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875793 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875831 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-script-lib\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875870 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-openvswitch\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875891 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-slash\") pod \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\" (UID: \"bcf34ad7-9bed-49eb-ad10-20bc5825292a\") "
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875904 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.875948 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876019 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-slash" (OuterVolumeSpecName: "host-slash") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876059 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876243 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876360 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "ovnkube-script-lib".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876690 4710 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876714 4710 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876739 4710 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876750 4710 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876780 4710 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876791 4710 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876799 4710 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876807 4710 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876816 4710 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876828 4710 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876857 4710 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876867 4710 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876876 4710 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-slash\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876889 4710 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-log-socket\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876898 4710 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876906 4710 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-node-log\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.876933 4710 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.880926 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.881072 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcf34ad7-9bed-49eb-ad10-20bc5825292a-kube-api-access-6pzd6" (OuterVolumeSpecName: "kube-api-access-6pzd6") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "kube-api-access-6pzd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.892039 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bcf34ad7-9bed-49eb-ad10-20bc5825292a" (UID: "bcf34ad7-9bed-49eb-ad10-20bc5825292a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978287 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1dbb7a44-f103-4680-898a-8b1e07d4924f-ovnkube-config\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978338 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1dbb7a44-f103-4680-898a-8b1e07d4924f-ovnkube-script-lib\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978363 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-slash\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978378 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-node-log\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978392 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-run-ovn-kubernetes\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978411 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1dbb7a44-f103-4680-898a-8b1e07d4924f-env-overrides\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978446 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-run-ovn\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978461 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-run-openvswitch\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978544 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978613 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-var-lib-openvswitch\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978678 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjrm9\" (UniqueName: \"kubernetes.io/projected/1dbb7a44-f103-4680-898a-8b1e07d4924f-kube-api-access-vjrm9\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978705 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-etc-openvswitch\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978732 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-cni-netd\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978848 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-run-systemd\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978896 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-kubelet\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.978942 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-systemd-units\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.979003 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-run-netns\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.979043 4710 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-log-socket\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.979086 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-cni-bin\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.979125 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1dbb7a44-f103-4680-898a-8b1e07d4924f-ovn-node-metrics-cert\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.979218 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pzd6\" (UniqueName: \"kubernetes.io/projected/bcf34ad7-9bed-49eb-ad10-20bc5825292a-kube-api-access-6pzd6\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.979241 4710 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bcf34ad7-9bed-49eb-ad10-20bc5825292a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:41 crc kubenswrapper[4710]: I1128 17:08:41.979250 4710 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bcf34ad7-9bed-49eb-ad10-20bc5825292a-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080122 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-log-socket\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080188 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-cni-bin\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080244 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1dbb7a44-f103-4680-898a-8b1e07d4924f-ovn-node-metrics-cert\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080246 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-log-socket\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080291 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1dbb7a44-f103-4680-898a-8b1e07d4924f-ovnkube-config\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080308 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-cni-bin\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080338 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1dbb7a44-f103-4680-898a-8b1e07d4924f-ovnkube-script-lib\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080379 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-slash\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080407 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-node-log\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080435 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-run-ovn-kubernetes\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080467 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1dbb7a44-f103-4680-898a-8b1e07d4924f-env-overrides\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080504 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-run-openvswitch\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080534 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-run-ovn\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080565 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080596 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-var-lib-openvswitch\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080647 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjrm9\" (UniqueName: \"kubernetes.io/projected/1dbb7a44-f103-4680-898a-8b1e07d4924f-kube-api-access-vjrm9\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080685 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-etc-openvswitch\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080717 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-cni-netd\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080719 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-run-openvswitch\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080752 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-run-systemd\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080806 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-node-log\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080820 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-kubelet\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080822 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-var-lib-openvswitch\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080843 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-run-ovn-kubernetes\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080856 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-systemd-units\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080874 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080782 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-slash\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080899 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-run-netns\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080911 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-run-systemd\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.080953 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-run-ovn\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.081013 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-etc-openvswitch\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.081031 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-kubelet\") pod \"ovnkube-node-mzjsc\" 
(UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.081048 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-systemd-units\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.081020 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-run-netns\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.081038 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1dbb7a44-f103-4680-898a-8b1e07d4924f-host-cni-netd\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.081080 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1dbb7a44-f103-4680-898a-8b1e07d4924f-ovnkube-config\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.081300 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1dbb7a44-f103-4680-898a-8b1e07d4924f-env-overrides\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.081375 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1dbb7a44-f103-4680-898a-8b1e07d4924f-ovnkube-script-lib\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.084177 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1dbb7a44-f103-4680-898a-8b1e07d4924f-ovn-node-metrics-cert\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.103237 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjrm9\" (UniqueName: \"kubernetes.io/projected/1dbb7a44-f103-4680-898a-8b1e07d4924f-kube-api-access-vjrm9\") pod \"ovnkube-node-mzjsc\" (UID: \"1dbb7a44-f103-4680-898a-8b1e07d4924f\") " pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.146373 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:42 crc kubenswrapper[4710]: W1128 17:08:42.172709 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1dbb7a44_f103_4680_898a_8b1e07d4924f.slice/crio-d98e5099cf1eece8869953d959fdc7030799cccf5e2328c2d9eb05292ffc8c76 WatchSource:0}: Error finding container d98e5099cf1eece8869953d959fdc7030799cccf5e2328c2d9eb05292ffc8c76: Status 404 returned error can't find the container with id d98e5099cf1eece8869953d959fdc7030799cccf5e2328c2d9eb05292ffc8c76 Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.351073 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-kkqp9" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.439009 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/2.log" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.439784 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/1.log" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.439855 4710 generic.go:334] "Generic (PLEG): container finished" podID="b2ae360a-eba6-4e76-9942-83f5c21f3877" containerID="a629b14c6ba490c00394b27559807625366fd25664c19466b47c4835e45f6415" exitCode=2 Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.439917 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2j8nb" event={"ID":"b2ae360a-eba6-4e76-9942-83f5c21f3877","Type":"ContainerDied","Data":"a629b14c6ba490c00394b27559807625366fd25664c19466b47c4835e45f6415"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.440004 4710 scope.go:117] "RemoveContainer" containerID="f20c03525a66139ff45c2901ac6d842794da8eddfc1f0a094d7de6367e406b4c" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.440574 4710 scope.go:117] "RemoveContainer" containerID="a629b14c6ba490c00394b27559807625366fd25664c19466b47c4835e45f6415" Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.440778 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2j8nb_openshift-multus(b2ae360a-eba6-4e76-9942-83f5c21f3877)\"" pod="openshift-multus/multus-2j8nb" podUID="b2ae360a-eba6-4e76-9942-83f5c21f3877" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.442632 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovnkube-controller/3.log" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.447311 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovn-acl-logging/0.log" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448211 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-mzbq9_bcf34ad7-9bed-49eb-ad10-20bc5825292a/ovn-controller/0.log" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448668 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0" exitCode=0 Nov 28 17:08:42 crc kubenswrapper[4710]: 
I1128 17:08:42.448692 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164" exitCode=0 Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448700 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8" exitCode=0 Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448711 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c" exitCode=0 Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448723 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103" exitCode=0 Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448740 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259" exitCode=0 Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448750 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312" exitCode=143 Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448757 4710 generic.go:334] "Generic (PLEG): container finished" podID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" containerID="40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85" exitCode=143 Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448791 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448790 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448840 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448855 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448868 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448880 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448892 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448905 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448917 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448925 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448931 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448940 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448946 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448952 4710 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448957 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448962 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448966 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448973 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448981 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448987 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448992 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.448997 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449002 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449007 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449012 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449017 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449022 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449027 4710 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449034 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449044 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449050 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449055 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449062 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449067 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449072 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449079 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449085 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449094 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449106 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449117 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzbq9" event={"ID":"bcf34ad7-9bed-49eb-ad10-20bc5825292a","Type":"ContainerDied","Data":"1f95ef1a130a6db1354044f3cddb37e9f50f871760b4165713bb1a8370ad3de0"} Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449129 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} 
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449136 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449143 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449150 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449156 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449162 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449170 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449177 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449183 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.449189 4710 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.455253 4710 generic.go:334] "Generic (PLEG): container finished" podID="1dbb7a44-f103-4680-898a-8b1e07d4924f" containerID="2d452c13c972df6a43c2676c42a81678e15fa814f13519564a5d2b0d2f673aee" exitCode=0
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.455289 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerDied","Data":"2d452c13c972df6a43c2676c42a81678e15fa814f13519564a5d2b0d2f673aee"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.455313 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerStarted","Data":"d98e5099cf1eece8869953d959fdc7030799cccf5e2328c2d9eb05292ffc8c76"}
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.479944 4710 scope.go:117] "RemoveContainer" containerID="ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.520782 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mzbq9"]
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.523769 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.524888 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mzbq9"]
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.544524 4710 scope.go:117] "RemoveContainer" containerID="5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.557922 4710 scope.go:117] "RemoveContainer" containerID="51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.572936 4710 scope.go:117] "RemoveContainer" containerID="f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.588588 4710 scope.go:117] "RemoveContainer" containerID="1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.603207 4710 scope.go:117] "RemoveContainer" containerID="9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.638553 4710 scope.go:117] "RemoveContainer" containerID="6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.672833 4710 scope.go:117] "RemoveContainer" containerID="40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.689564 4710 scope.go:117] "RemoveContainer" containerID="ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.712217 4710 scope.go:117] "RemoveContainer" containerID="ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.712693 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0\": container with ID starting with ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0 not found: ID does not exist" containerID="ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.712742 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} err="failed to get container status \"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0\": rpc error: code = NotFound desc = could not find container \"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0\": container with ID starting with ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.712803 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.715195 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\": container with ID starting with b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e not found: ID does not exist" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.715221 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"} err="failed to get container status \"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\": rpc error: code = NotFound desc = could not find container \"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\": container with ID starting with b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.715237 4710 scope.go:117] "RemoveContainer" containerID="5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.715645 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\": container with ID starting with 5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164 not found: ID does not exist" containerID="5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.715692 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"} err="failed to get container status \"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\": rpc error: code = NotFound desc = could not find container \"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\": container with ID starting with 5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.715706 4710 scope.go:117] "RemoveContainer" containerID="51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.716021 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\": container with ID starting with 51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8 not found: ID does not exist" containerID="51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.716059 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"} err="failed to get container status \"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\": rpc error: code = NotFound desc = could not find container \"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\": container with ID starting with 51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.716088 4710 scope.go:117] "RemoveContainer" containerID="f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.716357 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\": container with ID starting with f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c not found: ID does not exist" containerID="f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.716379 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"} err="failed to get container status \"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\": rpc error: code = NotFound desc = could not find container \"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\": container with ID starting with f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.716393 4710 scope.go:117] "RemoveContainer" containerID="1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.716618 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\": container with ID starting with 1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103 not found: ID does not exist" containerID="1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.716634 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"} err="failed to get container status \"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\": rpc error: code = NotFound desc = could not find container \"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\": container with ID starting with 1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.716651 4710 scope.go:117] "RemoveContainer" containerID="9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.716904 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\": container with ID starting with 9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259 not found: ID does not exist" containerID="9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.716922 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"} err="failed to get container status \"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\": rpc error: code = NotFound desc = could not find container \"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\": container with ID starting with 9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.716936 4710 scope.go:117] "RemoveContainer" containerID="6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.717162 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": container with ID starting with 6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312 not found: ID does not exist" containerID="6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.717220 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} err="failed to get container status \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": rpc error: code = NotFound desc = could not find container \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": container with ID starting with 6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.717238 4710 scope.go:117] "RemoveContainer" containerID="40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.717521 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\": container with ID starting with 40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85 not found: ID does not exist" containerID="40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.717540 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"} err="failed to get container status \"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\": rpc error: code = NotFound desc = could not find container \"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\": container with ID starting with 40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.717556 4710 scope.go:117] "RemoveContainer" containerID="ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"
Nov 28 17:08:42 crc kubenswrapper[4710]: E1128 17:08:42.717817 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\": container with ID starting with ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f not found: ID does not exist" containerID="ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.717842 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"} err="failed to get container status \"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\": rpc error: code = NotFound desc = could not find container \"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\": container with ID starting with ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.717863 4710 scope.go:117] "RemoveContainer" containerID="ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.718543 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} err="failed to get container status \"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0\": rpc error: code = NotFound desc = could not find container \"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0\": container with ID starting with ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.718562 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.718781 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"} err="failed to get container status \"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\": rpc error: code = NotFound desc = could not find container \"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\": container with ID starting with b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.718804 4710 scope.go:117] "RemoveContainer" containerID="5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.718989 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"} err="failed to get container status \"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\": rpc error: code = NotFound desc = could not find container \"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\": container with ID starting with 5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.719011 4710 scope.go:117] "RemoveContainer" containerID="51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.719274 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"} err="failed to get container status \"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\": rpc error: code = NotFound desc = could not find container \"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\": container with ID starting with 51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.719316 4710 scope.go:117] "RemoveContainer" containerID="f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.719571 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"} err="failed to get container status \"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\": rpc error: code = NotFound desc = could not find container \"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\": container with ID starting with f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.719597 4710 scope.go:117] "RemoveContainer" containerID="1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.719861 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"} err="failed to get container status \"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\": rpc error: code = NotFound desc = could not find container \"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\": container with ID starting with 1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.719883 4710 scope.go:117] "RemoveContainer" containerID="9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.720177 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"} err="failed to get container status \"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\": rpc error: code = NotFound desc = could not find container \"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\": container with ID starting with 9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.720213 4710 scope.go:117] "RemoveContainer" containerID="6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.720464 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} err="failed to get container status \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": rpc error: code = NotFound desc = could not find container \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": container with ID starting with 6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.720490 4710 scope.go:117] "RemoveContainer" containerID="40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.720929 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"} err="failed to get container status \"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\": rpc error: code = NotFound desc = could not find container \"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\": container with ID starting with 40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.720988 4710 scope.go:117] "RemoveContainer" containerID="ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.721398 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"} err="failed to get container status \"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\": rpc error: code = NotFound desc = could not find container \"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\": container with ID starting with ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.721415 4710 scope.go:117] "RemoveContainer" containerID="ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.721798 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} err="failed to get container status \"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0\": rpc error: code = NotFound desc = could not find container \"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0\": container with ID starting with ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.721826 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.722221 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"} err="failed to get container status \"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\": rpc error: code = NotFound desc = could not find container \"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\": container with ID starting with b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.722458 4710 scope.go:117] "RemoveContainer" containerID="5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.722906 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"} err="failed to get container status \"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\": rpc error: code = NotFound desc = could not find container \"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\": container with ID starting with 5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.722931 4710 scope.go:117] "RemoveContainer" containerID="51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.723257 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"} err="failed to get container status \"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\": rpc error: code = NotFound desc = could not find container \"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\": container with ID starting with 51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.723282 4710 scope.go:117] "RemoveContainer" containerID="f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.723803 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"} err="failed to get container status \"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\": rpc error: code = NotFound desc = could not find container \"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\": container with ID starting with f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.723822 4710 scope.go:117] "RemoveContainer" containerID="1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.724304 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"} err="failed to get container status \"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\": rpc error: code = NotFound desc = could not find container \"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\": container with ID starting with 1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.724355 4710 scope.go:117] "RemoveContainer" containerID="9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.724697 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"} err="failed to get container status \"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\": rpc error: code = NotFound desc = could not find container \"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\": container with ID starting with 9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.724715 4710 scope.go:117] "RemoveContainer" containerID="6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.725003 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} err="failed to get container status \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": rpc error: code = NotFound desc = could not find container \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": container with ID starting with 6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.725025 4710 scope.go:117] "RemoveContainer" containerID="40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.725369 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"} err="failed to get container status \"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\": rpc error: code = NotFound desc = could not find container \"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\": container with ID starting with 40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.725419 4710 scope.go:117] "RemoveContainer" containerID="ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.725705 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"} err="failed to get container status \"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\": rpc error: code = NotFound desc = could not find container \"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\": container with ID starting with ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.725723 4710 scope.go:117] "RemoveContainer" containerID="ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.725906 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0"} err="failed to get container status \"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0\": rpc error: code = NotFound desc = could not find container \"ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0\": container with ID starting with ae1447ac81f14ce81181faf8816143725e2ba9f389f92f4c5245efe037a9fbd0 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.725942 4710 scope.go:117] "RemoveContainer" containerID="b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.726233 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e"} err="failed to get container status \"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\": rpc error: code = NotFound desc = could not find container \"b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e\": container with ID starting with b47c3bd1f91151c232ff2f0c7036071b3d89edbbd02d9ee357580582aff6a78e not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.726251 4710 scope.go:117] "RemoveContainer" containerID="5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.726464 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164"} err="failed to get container status \"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\": rpc error: code = NotFound desc = could not find container \"5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164\": container with ID starting with 5293a41432c91acb1ea291c8240341bf21467f6dcd6cfe05693646ea68417164 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.726489 4710 scope.go:117] "RemoveContainer" containerID="51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.726746 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8"} err="failed to get container status \"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\": rpc error: code = NotFound desc = could not find container \"51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8\": container with ID starting with 51bf1e0fee69c28882edd9a407b694acfbd4fb0557190dfe5e62ff96ed9c08f8 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.726789 4710 scope.go:117] "RemoveContainer" containerID="f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.727360 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c"} err="failed to get container status \"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\": rpc error: code = NotFound desc = could not find container \"f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c\": container with ID starting with f8aa5ae2c12543884cc0351d71affb5c86de81dbd4ed2709ef5cda9a514a7f7c not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.727382 4710 scope.go:117] "RemoveContainer" containerID="1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.727678 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103"} err="failed to get container status \"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\": rpc error: code = NotFound desc = could not find container \"1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103\": container with ID starting with 1475c7a60cf21c09403c704154b44055f90e0d47c0b3e10716a51be2bdc00103 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.727696 4710 scope.go:117] "RemoveContainer" containerID="9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.729583 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259"} err="failed to get container status \"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\": rpc error: code = NotFound desc = could not find container \"9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259\": container with ID starting with 9da48965c9ce0f57c55e419243871bd4f74da2e9fb1fb7b70b412416e0a78259 not found: ID does not exist"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.729611 4710 scope.go:117] "RemoveContainer" containerID="6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"
Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.729969 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} err="failed to get container status \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": rpc error: code = NotFound desc = could not find container \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": container with ID starting with 6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312 not found: ID does not exist"
containerID={"Type":"cri-o","ID":"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312"} err="failed to get container status \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": rpc error: code = NotFound desc = could not find container \"6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312\": container with ID starting with 6319cf0c1af08921b4b2a8a14ab1d0d94d4978b508a392b3cbd13c45142ff312 not found: ID does not exist" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.729992 4710 scope.go:117] "RemoveContainer" containerID="40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.731143 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85"} err="failed to get container status \"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\": rpc error: code = NotFound desc = could not find container \"40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85\": container with ID starting with 40537cdbf41fa07be45583cb1f53480d38cbdcb1b32a92397c1f0499b6ebbb85 not found: ID does not exist" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.731162 4710 scope.go:117] "RemoveContainer" containerID="ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f" Nov 28 17:08:42 crc kubenswrapper[4710]: I1128 17:08:42.731358 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f"} err="failed to get container status \"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\": rpc error: code = NotFound desc = could not find container \"ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f\": container with ID starting with ac1cf3a446601cadf90b4ed2c60d0dff9c784d47278e3c7fa61849d288747d2f not found: ID does not exist" Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.148400 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcf34ad7-9bed-49eb-ad10-20bc5825292a" path="/var/lib/kubelet/pods/bcf34ad7-9bed-49eb-ad10-20bc5825292a/volumes" Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.343679 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.343751 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.343828 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.344372 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"503a90972a7301443a4a3341e128be8edb746f7d27a04b1ad0ecedf9ae666272"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.344424 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://503a90972a7301443a4a3341e128be8edb746f7d27a04b1ad0ecedf9ae666272" gracePeriod=600 Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.465076 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="503a90972a7301443a4a3341e128be8edb746f7d27a04b1ad0ecedf9ae666272" exitCode=0 Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.465156 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"503a90972a7301443a4a3341e128be8edb746f7d27a04b1ad0ecedf9ae666272"} Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.465417 4710 scope.go:117] "RemoveContainer" containerID="456a00d5cd0fbfc13a479799f023f2982c20805bb4d32bd660ed7b512390b959" Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.468037 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/2.log" Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.474852 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerStarted","Data":"cac9c4ebd922b912b5cb5d5ce22d7f9de103752a20d075fa7b0e27ac4a2265fb"} Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.474888 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerStarted","Data":"236df6a5f6cef08cd86a4199d91e63193764df3a9d9989d2cd853f267356681b"} Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.474899 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerStarted","Data":"7943ff37c571f99f0f80ab914f61c2c8be28c86ee208941cdbda89bdb1cab135"} Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.474910 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerStarted","Data":"535c0b513c250a587727bd77a584614ebc4ce333d60ca47c6cdc0fd8cc6ceba9"} Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.474919 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerStarted","Data":"3775b4cc12ed7376fa2196c360df533a13e966112f3e76faf741a2c36550ec7b"} Nov 28 17:08:43 crc kubenswrapper[4710]: I1128 17:08:43.474934 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerStarted","Data":"9d51ef7da9d3a37bf88e3ab659ea974c5bade140add592517ce49182ab0ffe3e"} Nov 28 17:08:44 crc kubenswrapper[4710]: I1128 17:08:44.482241 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"c6d85207656f6d2601d2bdd070cb40b8f4df58d52a8f16d4308eea97c4776e87"} Nov 28 17:08:46 crc kubenswrapper[4710]: I1128 17:08:46.503702 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerStarted","Data":"8cf538ff9980dbe8ab81fdccd7836af233a1805cc15a647fecf3af50d1df28a4"} Nov 28 17:08:47 crc kubenswrapper[4710]: I1128 17:08:47.514763 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" event={"ID":"1dbb7a44-f103-4680-898a-8b1e07d4924f","Type":"ContainerStarted","Data":"2bf01b82e008a4196a0c890bb222683907cb5d18d9af7dd848bb35f243e6dec9"} Nov 28 17:08:47 crc kubenswrapper[4710]: I1128 17:08:47.515116 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:47 crc kubenswrapper[4710]: I1128 17:08:47.515162 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:47 crc kubenswrapper[4710]: I1128 17:08:47.515173 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:47 crc kubenswrapper[4710]: I1128 17:08:47.540099 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:47 crc kubenswrapper[4710]: I1128 17:08:47.540742 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" Nov 28 17:08:47 crc kubenswrapper[4710]: I1128 17:08:47.545063 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc" podStartSLOduration=6.545046715 podStartE2EDuration="6.545046715s" podCreationTimestamp="2025-11-28 17:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:08:47.543142765 +0000 UTC m=+616.801442820" watchObservedRunningTime="2025-11-28 17:08:47.545046715 +0000 UTC m=+616.803346770" Nov 28 17:08:54 crc kubenswrapper[4710]: I1128 17:08:54.141834 4710 scope.go:117] "RemoveContainer" containerID="a629b14c6ba490c00394b27559807625366fd25664c19466b47c4835e45f6415" Nov 28 17:08:54 crc kubenswrapper[4710]: E1128 17:08:54.142452 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-2j8nb_openshift-multus(b2ae360a-eba6-4e76-9942-83f5c21f3877)\"" pod="openshift-multus/multus-2j8nb" podUID="b2ae360a-eba6-4e76-9942-83f5c21f3877" Nov 28 17:09:06 crc kubenswrapper[4710]: I1128 17:09:06.141333 4710 scope.go:117] "RemoveContainer" containerID="a629b14c6ba490c00394b27559807625366fd25664c19466b47c4835e45f6415" Nov 28 17:09:08 crc kubenswrapper[4710]: I1128 17:09:08.649242 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2j8nb_b2ae360a-eba6-4e76-9942-83f5c21f3877/kube-multus/2.log" Nov 28 17:09:08 crc kubenswrapper[4710]: I1128 17:09:08.649550 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2j8nb" 
event={"ID":"b2ae360a-eba6-4e76-9942-83f5c21f3877","Type":"ContainerStarted","Data":"308278c3b41d06aa11885f32e278f150aaa63e06519f407493c503310038a187"} Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.282605 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"] Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.284307 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.287852 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.291678 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"] Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.347389 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxqw8\" (UniqueName: \"kubernetes.io/projected/fbd014d4-ebd1-4399-8fe0-82dea587a945-kube-api-access-wxqw8\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.347451 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.347565 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.473347 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"] Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.474446 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.482438 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"] Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.528355 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.528407 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.528444 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.528471 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxqw8\" (UniqueName: \"kubernetes.io/projected/fbd014d4-ebd1-4399-8fe0-82dea587a945-kube-api-access-wxqw8\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.528490 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.528536 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm5cc\" (UniqueName: \"kubernetes.io/projected/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-kube-api-access-pm5cc\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.528999 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-util\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") " 
pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.529140 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-bundle\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.552057 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxqw8\" (UniqueName: \"kubernetes.io/projected/fbd014d4-ebd1-4399-8fe0-82dea587a945-kube-api-access-wxqw8\") pod \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") " pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.619678 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.629608 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.629687 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.629727 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm5cc\" (UniqueName: \"kubernetes.io/projected/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-kube-api-access-pm5cc\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.630116 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-bundle\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.630139 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-util\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" Nov 28 17:09:09 crc 
kubenswrapper[4710]: I1128 17:09:09.647450 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm5cc\" (UniqueName: \"kubernetes.io/projected/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-kube-api-access-pm5cc\") pod \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") " pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:09 crc kubenswrapper[4710]: E1128 17:09:09.649935 4710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace_fbd014d4-ebd1-4399-8fe0-82dea587a945_0(e1ff883ff38504a8e8631ed941b140bc05782117811a755c62a479798adfeaae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 17:09:09 crc kubenswrapper[4710]: E1128 17:09:09.650004 4710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace_fbd014d4-ebd1-4399-8fe0-82dea587a945_0(e1ff883ff38504a8e8631ed941b140bc05782117811a755c62a479798adfeaae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:09 crc kubenswrapper[4710]: E1128 17:09:09.650027 4710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace_fbd014d4-ebd1-4399-8fe0-82dea587a945_0(e1ff883ff38504a8e8631ed941b140bc05782117811a755c62a479798adfeaae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:09 crc kubenswrapper[4710]: E1128 17:09:09.650072 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace(fbd014d4-ebd1-4399-8fe0-82dea587a945)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace(fbd014d4-ebd1-4399-8fe0-82dea587a945)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace_fbd014d4-ebd1-4399-8fe0-82dea587a945_0(e1ff883ff38504a8e8631ed941b140bc05782117811a755c62a479798adfeaae): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" podUID="fbd014d4-ebd1-4399-8fe0-82dea587a945"
Nov 28 17:09:09 crc kubenswrapper[4710]: I1128 17:09:09.840196 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:09 crc kubenswrapper[4710]: E1128 17:09:09.863730 4710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda_0(35298c8f43afd97f46c6365e55f7584e8d326b65d208a92ffd81b28e17b6e4fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 17:09:09 crc kubenswrapper[4710]: E1128 17:09:09.863834 4710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda_0(35298c8f43afd97f46c6365e55f7584e8d326b65d208a92ffd81b28e17b6e4fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:09 crc kubenswrapper[4710]: E1128 17:09:09.863856 4710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda_0(35298c8f43afd97f46c6365e55f7584e8d326b65d208a92ffd81b28e17b6e4fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:09 crc kubenswrapper[4710]: E1128 17:09:09.863904 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace(dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace(dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda_0(35298c8f43afd97f46c6365e55f7584e8d326b65d208a92ffd81b28e17b6e4fc): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" podUID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda"
Nov 28 17:09:10 crc kubenswrapper[4710]: I1128 17:09:10.660352 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:10 crc kubenswrapper[4710]: I1128 17:09:10.660442 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:10 crc kubenswrapper[4710]: I1128 17:09:10.661097 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:10 crc kubenswrapper[4710]: I1128 17:09:10.661297 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:10 crc kubenswrapper[4710]: E1128 17:09:10.716945 4710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace_fbd014d4-ebd1-4399-8fe0-82dea587a945_0(98a285fd69433c885e1ff1c725a1bc6840096f295dfae16ab45a78075fe28bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 17:09:10 crc kubenswrapper[4710]: E1128 17:09:10.717627 4710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace_fbd014d4-ebd1-4399-8fe0-82dea587a945_0(98a285fd69433c885e1ff1c725a1bc6840096f295dfae16ab45a78075fe28bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:10 crc kubenswrapper[4710]: E1128 17:09:10.717688 4710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace_fbd014d4-ebd1-4399-8fe0-82dea587a945_0(98a285fd69433c885e1ff1c725a1bc6840096f295dfae16ab45a78075fe28bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:10 crc kubenswrapper[4710]: E1128 17:09:10.717811 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace(fbd014d4-ebd1-4399-8fe0-82dea587a945)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace(fbd014d4-ebd1-4399-8fe0-82dea587a945)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_openshift-marketplace_fbd014d4-ebd1-4399-8fe0-82dea587a945_0(98a285fd69433c885e1ff1c725a1bc6840096f295dfae16ab45a78075fe28bb0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" podUID="fbd014d4-ebd1-4399-8fe0-82dea587a945"
Nov 28 17:09:10 crc kubenswrapper[4710]: E1128 17:09:10.722654 4710 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda_0(58b69188535e36ce560141170fa553543a86d5e29bec7611ab9b03837b5698c8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Nov 28 17:09:10 crc kubenswrapper[4710]: E1128 17:09:10.722707 4710 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda_0(58b69188535e36ce560141170fa553543a86d5e29bec7611ab9b03837b5698c8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:10 crc kubenswrapper[4710]: E1128 17:09:10.722727 4710 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda_0(58b69188535e36ce560141170fa553543a86d5e29bec7611ab9b03837b5698c8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:10 crc kubenswrapper[4710]: E1128 17:09:10.722915 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace(dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace(dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_openshift-marketplace_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda_0(58b69188535e36ce560141170fa553543a86d5e29bec7611ab9b03837b5698c8): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" podUID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda"
Nov 28 17:09:12 crc kubenswrapper[4710]: I1128 17:09:12.174393 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mzjsc"
Nov 28 17:09:21 crc kubenswrapper[4710]: I1128 17:09:21.145405 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:21 crc kubenswrapper[4710]: I1128 17:09:21.147803 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:21 crc kubenswrapper[4710]: I1128 17:09:21.379693 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"]
Nov 28 17:09:21 crc kubenswrapper[4710]: I1128 17:09:21.733015 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" event={"ID":"fbd014d4-ebd1-4399-8fe0-82dea587a945","Type":"ContainerStarted","Data":"047281b19485fd5415c5d2329a51f9451b45296281c88bf3f115902cf68cf086"}
Nov 28 17:09:22 crc kubenswrapper[4710]: I1128 17:09:22.140791 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
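[Editor's note] Every RunPodSandbox failure above traces back to one condition: the runtime found no network definition under /etc/kubernetes/cni/net.d/ because the network provider was still coming up. Consistent with that, the ovnkube-node readiness probe flips to ready at 17:09:12 and both marketplace sandboxes finally start at 17:09:21-22. A minimal sketch of the directory check behind the logged message, assuming only the Go standard library (the real check lives in CRI-O's ocicni code, which also watches the directory for changes):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether confDir contains at least one CNI network
// definition (.conf, .conflist, or .json). Until one appears, every
// RunPodSandbox call fails with the error seen in the log above.
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	ok, err := hasCNIConfig(confDir)
	if err != nil || !ok {
		fmt.Printf("no CNI configuration file in %s. Has your network provider started? (err=%v)\n", confDir, err)
		return
	}
	fmt.Println("CNI configuration present; sandbox creation can proceed")
}
```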
Nov 28 17:09:22 crc kubenswrapper[4710]: I1128 17:09:22.141364 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:22 crc kubenswrapper[4710]: I1128 17:09:22.343947 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"]
Nov 28 17:09:22 crc kubenswrapper[4710]: W1128 17:09:22.353095 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc4287a5_9f7c_4c3e_b084_f45fd0d4ddda.slice/crio-4a28eb000376bbb400aac911c34af0b367d4c679cea851cd3e621e8d972b63bb WatchSource:0}: Error finding container 4a28eb000376bbb400aac911c34af0b367d4c679cea851cd3e621e8d972b63bb: Status 404 returned error can't find the container with id 4a28eb000376bbb400aac911c34af0b367d4c679cea851cd3e621e8d972b63bb
Nov 28 17:09:22 crc kubenswrapper[4710]: I1128 17:09:22.742184 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" event={"ID":"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda","Type":"ContainerStarted","Data":"4a28eb000376bbb400aac911c34af0b367d4c679cea851cd3e621e8d972b63bb"}
Nov 28 17:09:24 crc kubenswrapper[4710]: I1128 17:09:24.763547 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" event={"ID":"fbd014d4-ebd1-4399-8fe0-82dea587a945","Type":"ContainerStarted","Data":"2ce0e66ffef6781373311b29d24bb6c761b752e473c1f799b06c0610c4cc3c5f"}
Nov 28 17:09:25 crc kubenswrapper[4710]: I1128 17:09:25.770731 4710 generic.go:334] "Generic (PLEG): container finished" podID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerID="cda92ced4f9b94de12b57d8aa940d1373f036935c1d1951d4ce23fa399a2eee4" exitCode=0
Nov 28 17:09:25 crc kubenswrapper[4710]: I1128 17:09:25.771089 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" event={"ID":"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda","Type":"ContainerDied","Data":"cda92ced4f9b94de12b57d8aa940d1373f036935c1d1951d4ce23fa399a2eee4"}
Nov 28 17:09:25 crc kubenswrapper[4710]: I1128 17:09:25.773811 4710 generic.go:334] "Generic (PLEG): container finished" podID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerID="2ce0e66ffef6781373311b29d24bb6c761b752e473c1f799b06c0610c4cc3c5f" exitCode=0
Nov 28 17:09:25 crc kubenswrapper[4710]: I1128 17:09:25.773839 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" event={"ID":"fbd014d4-ebd1-4399-8fe0-82dea587a945","Type":"ContainerDied","Data":"2ce0e66ffef6781373311b29d24bb6c761b752e473c1f799b06c0610c4cc3c5f"}
Nov 28 17:09:28 crc kubenswrapper[4710]: I1128 17:09:28.795439 4710 generic.go:334] "Generic (PLEG): container finished" podID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerID="b2d9ee24f17eff394b4b347106d01de0b3233b36bb196c0a7f69b42c4dab5c2c" exitCode=0
Nov 28 17:09:28 crc kubenswrapper[4710]: I1128 17:09:28.795522 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" event={"ID":"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda","Type":"ContainerDied","Data":"b2d9ee24f17eff394b4b347106d01de0b3233b36bb196c0a7f69b42c4dab5c2c"}
Nov 28 17:09:28 crc kubenswrapper[4710]: I1128 17:09:28.799560 4710 generic.go:334] "Generic (PLEG): container finished" podID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerID="a5d5d8040c66a68d4b33e0433cbea50a96de62a9c3ecd10fa2db1c77d4bd807c" exitCode=0
Nov 28 17:09:28 crc kubenswrapper[4710]: I1128 17:09:28.799603 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" event={"ID":"fbd014d4-ebd1-4399-8fe0-82dea587a945","Type":"ContainerDied","Data":"a5d5d8040c66a68d4b33e0433cbea50a96de62a9c3ecd10fa2db1c77d4bd807c"}
Nov 28 17:09:29 crc kubenswrapper[4710]: I1128 17:09:29.811016 4710 generic.go:334] "Generic (PLEG): container finished" podID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerID="ceee29e9c12790c542e590dcb11061d6f95935fc2dffa15da3d6cd80ad1566e2" exitCode=0
Nov 28 17:09:29 crc kubenswrapper[4710]: I1128 17:09:29.811104 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" event={"ID":"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda","Type":"ContainerDied","Data":"ceee29e9c12790c542e590dcb11061d6f95935fc2dffa15da3d6cd80ad1566e2"}
Nov 28 17:09:29 crc kubenswrapper[4710]: I1128 17:09:29.813628 4710 generic.go:334] "Generic (PLEG): container finished" podID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerID="2d2cace9fad837a39a343a2be86a3b0d241562dc218c834c138706cd347d087a" exitCode=0
Nov 28 17:09:29 crc kubenswrapper[4710]: I1128 17:09:29.813675 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" event={"ID":"fbd014d4-ebd1-4399-8fe0-82dea587a945","Type":"ContainerDied","Data":"2d2cace9fad837a39a343a2be86a3b0d241562dc218c834c138706cd347d087a"}
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.104942 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.112943 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.180868 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm5cc\" (UniqueName: \"kubernetes.io/projected/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-kube-api-access-pm5cc\") pod \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") "
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.180924 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxqw8\" (UniqueName: \"kubernetes.io/projected/fbd014d4-ebd1-4399-8fe0-82dea587a945-kube-api-access-wxqw8\") pod \"fbd014d4-ebd1-4399-8fe0-82dea587a945\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") "
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.180973 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-bundle\") pod \"fbd014d4-ebd1-4399-8fe0-82dea587a945\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") "
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.182098 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-bundle" (OuterVolumeSpecName: "bundle") pod "fbd014d4-ebd1-4399-8fe0-82dea587a945" (UID: "fbd014d4-ebd1-4399-8fe0-82dea587a945"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.187323 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-kube-api-access-pm5cc" (OuterVolumeSpecName: "kube-api-access-pm5cc") pod "dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" (UID: "dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda"). InnerVolumeSpecName "kube-api-access-pm5cc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.187451 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbd014d4-ebd1-4399-8fe0-82dea587a945-kube-api-access-wxqw8" (OuterVolumeSpecName: "kube-api-access-wxqw8") pod "fbd014d4-ebd1-4399-8fe0-82dea587a945" (UID: "fbd014d4-ebd1-4399-8fe0-82dea587a945"). InnerVolumeSpecName "kube-api-access-wxqw8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.281624 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-bundle\") pod \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") "
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.281671 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-util\") pod \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\" (UID: \"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda\") "
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.281691 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-util\") pod \"fbd014d4-ebd1-4399-8fe0-82dea587a945\" (UID: \"fbd014d4-ebd1-4399-8fe0-82dea587a945\") "
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.282679 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-bundle" (OuterVolumeSpecName: "bundle") pod "dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" (UID: "dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.283103 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm5cc\" (UniqueName: \"kubernetes.io/projected/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-kube-api-access-pm5cc\") on node \"crc\" DevicePath \"\""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.283133 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxqw8\" (UniqueName: \"kubernetes.io/projected/fbd014d4-ebd1-4399-8fe0-82dea587a945-kube-api-access-wxqw8\") on node \"crc\" DevicePath \"\""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.283145 4710 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.283158 4710 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.291883 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-util" (OuterVolumeSpecName: "util") pod "dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" (UID: "dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.292035 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-util" (OuterVolumeSpecName: "util") pod "fbd014d4-ebd1-4399-8fe0-82dea587a945" (UID: "fbd014d4-ebd1-4399-8fe0-82dea587a945"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.384693 4710 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda-util\") on node \"crc\" DevicePath \"\""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.384734 4710 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/fbd014d4-ebd1-4399-8fe0-82dea587a945-util\") on node \"crc\" DevicePath \"\""
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.835219 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj" event={"ID":"fbd014d4-ebd1-4399-8fe0-82dea587a945","Type":"ContainerDied","Data":"047281b19485fd5415c5d2329a51f9451b45296281c88bf3f115902cf68cf086"}
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.835295 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="047281b19485fd5415c5d2329a51f9451b45296281c88bf3f115902cf68cf086"
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.835307 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj"
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.839748 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs" event={"ID":"dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda","Type":"ContainerDied","Data":"4a28eb000376bbb400aac911c34af0b367d4c679cea851cd3e621e8d972b63bb"}
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.839832 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a28eb000376bbb400aac911c34af0b367d4c679cea851cd3e621e8d972b63bb"
Nov 28 17:09:31 crc kubenswrapper[4710]: I1128 17:09:31.839835 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.909823 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"]
Nov 28 17:09:41 crc kubenswrapper[4710]: E1128 17:09:41.910433 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerName="extract"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.910445 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerName="extract"
Nov 28 17:09:41 crc kubenswrapper[4710]: E1128 17:09:41.910455 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerName="pull"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.910461 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerName="pull"
Nov 28 17:09:41 crc kubenswrapper[4710]: E1128 17:09:41.910478 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerName="util"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.910484 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerName="util"
Nov 28 17:09:41 crc kubenswrapper[4710]: E1128 17:09:41.910493 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerName="extract"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.910500 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerName="extract"
Nov 28 17:09:41 crc kubenswrapper[4710]: E1128 17:09:41.910507 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerName="pull"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.910513 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerName="pull"
Nov 28 17:09:41 crc kubenswrapper[4710]: E1128 17:09:41.910525 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerName="util"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.910530 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerName="util"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.910642 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda" containerName="extract"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.910651 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbd014d4-ebd1-4399-8fe0-82dea587a945" containerName="extract"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.911204 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.912925 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.913740 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-webhook-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.913805 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.913877 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-apiservice-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.913924 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5nqq\" (UniqueName: \"kubernetes.io/projected/13835a45-f211-4e69-bccd-98ef4e8a5594-kube-api-access-c5nqq\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.913961 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/13835a45-f211-4e69-bccd-98ef4e8a5594-manager-config\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.914492 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.914598 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.914724 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.915347 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-lxf86"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.916025 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt"
Nov 28 17:09:41 crc kubenswrapper[4710]: I1128 17:09:41.944563 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"]
Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.014689 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-webhook-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.014739 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.015153 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-apiservice-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.015235 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5nqq\" (UniqueName: \"kubernetes.io/projected/13835a45-f211-4e69-bccd-98ef4e8a5594-kube-api-access-c5nqq\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.015315 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/13835a45-f211-4e69-bccd-98ef4e8a5594-manager-config\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.016335 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/13835a45-f211-4e69-bccd-98ef4e8a5594-manager-config\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.023669 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.024339 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-webhook-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"
"MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-webhook-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.024595 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13835a45-f211-4e69-bccd-98ef4e8a5594-apiservice-cert\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.033516 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5nqq\" (UniqueName: \"kubernetes.io/projected/13835a45-f211-4e69-bccd-98ef4e8a5594-kube-api-access-c5nqq\") pod \"loki-operator-controller-manager-867dcf9474-l79hr\" (UID: \"13835a45-f211-4e69-bccd-98ef4e8a5594\") " pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.231256 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.665448 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr"] Nov 28 17:09:42 crc kubenswrapper[4710]: I1128 17:09:42.897088 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" event={"ID":"13835a45-f211-4e69-bccd-98ef4e8a5594","Type":"ContainerStarted","Data":"88084e935bcd07305aa6414829893f63de72a03bfc7c60fd6f64082a734e70cd"} Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.594631 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-rrn26"] Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.595342 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-rrn26" Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.597789 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.598044 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.598126 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-dwr2k" Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.608718 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-rrn26"] Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.635379 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wph27\" (UniqueName: \"kubernetes.io/projected/a83b9835-d280-4376-9a2d-b75efd5516d1-kube-api-access-wph27\") pod \"cluster-logging-operator-ff9846bd-rrn26\" (UID: \"a83b9835-d280-4376-9a2d-b75efd5516d1\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-rrn26" Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.735843 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wph27\" (UniqueName: \"kubernetes.io/projected/a83b9835-d280-4376-9a2d-b75efd5516d1-kube-api-access-wph27\") pod \"cluster-logging-operator-ff9846bd-rrn26\" (UID: \"a83b9835-d280-4376-9a2d-b75efd5516d1\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-rrn26" Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.775037 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wph27\" (UniqueName: \"kubernetes.io/projected/a83b9835-d280-4376-9a2d-b75efd5516d1-kube-api-access-wph27\") pod \"cluster-logging-operator-ff9846bd-rrn26\" (UID: \"a83b9835-d280-4376-9a2d-b75efd5516d1\") " pod="openshift-logging/cluster-logging-operator-ff9846bd-rrn26" Nov 28 17:09:43 crc kubenswrapper[4710]: I1128 17:09:43.913256 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-ff9846bd-rrn26" Nov 28 17:09:44 crc kubenswrapper[4710]: I1128 17:09:44.300911 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-ff9846bd-rrn26"] Nov 28 17:09:44 crc kubenswrapper[4710]: W1128 17:09:44.308524 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda83b9835_d280_4376_9a2d_b75efd5516d1.slice/crio-6adfafaa3df40164e55e3fbda783643770c5fc16cae0835de3fb445195cd03b1 WatchSource:0}: Error finding container 6adfafaa3df40164e55e3fbda783643770c5fc16cae0835de3fb445195cd03b1: Status 404 returned error can't find the container with id 6adfafaa3df40164e55e3fbda783643770c5fc16cae0835de3fb445195cd03b1 Nov 28 17:09:44 crc kubenswrapper[4710]: I1128 17:09:44.907228 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-rrn26" event={"ID":"a83b9835-d280-4376-9a2d-b75efd5516d1","Type":"ContainerStarted","Data":"6adfafaa3df40164e55e3fbda783643770c5fc16cae0835de3fb445195cd03b1"} Nov 28 17:09:54 crc kubenswrapper[4710]: I1128 17:09:54.984698 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" event={"ID":"13835a45-f211-4e69-bccd-98ef4e8a5594","Type":"ContainerStarted","Data":"773ef33cefa0eb8852eadf05972c5cb07490add5cbfddda09b173d7a289b67f4"} Nov 28 17:09:54 crc kubenswrapper[4710]: I1128 17:09:54.986572 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-ff9846bd-rrn26" event={"ID":"a83b9835-d280-4376-9a2d-b75efd5516d1","Type":"ContainerStarted","Data":"dae44e8796af12849fd2e21e42b388eedd76095b3c09cbe4c1c3c79e83fd981e"} Nov 28 17:09:55 crc kubenswrapper[4710]: I1128 17:09:55.006752 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-ff9846bd-rrn26" podStartSLOduration=2.144257193 podStartE2EDuration="12.006730092s" podCreationTimestamp="2025-11-28 17:09:43 +0000 UTC" firstStartedPulling="2025-11-28 17:09:44.309980519 +0000 UTC m=+673.568280564" lastFinishedPulling="2025-11-28 17:09:54.172453418 +0000 UTC m=+683.430753463" observedRunningTime="2025-11-28 17:09:55.002085055 +0000 UTC m=+684.260385120" watchObservedRunningTime="2025-11-28 17:09:55.006730092 +0000 UTC m=+684.265030127" Nov 28 17:10:02 crc kubenswrapper[4710]: I1128 17:10:02.042892 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" event={"ID":"13835a45-f211-4e69-bccd-98ef4e8a5594","Type":"ContainerStarted","Data":"d54f41e3f2a30c88662fbc56bade59edd748693d2b6dd7f2744d4a0bd2257958"} Nov 28 17:10:02 crc kubenswrapper[4710]: I1128 17:10:02.043444 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" Nov 28 17:10:02 crc kubenswrapper[4710]: I1128 17:10:02.048831 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" Nov 28 17:10:02 crc kubenswrapper[4710]: I1128 17:10:02.078944 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-867dcf9474-l79hr" podStartSLOduration=2.428824843 podStartE2EDuration="21.078926517s" 
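[Editor's note] The pod_startup_latency_tracker entries encode a simple relationship that can be checked against the logged fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same interval with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. A short check of the cluster-logging-operator numbers above, offered as a reading of the logged fields rather than the tracker's exact code:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// Layout matches the timestamps as they appear in the log
	// (fractional seconds are optional in Go's parser).
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-28 17:09:43 +0000 UTC")            // podCreationTimestamp
	firstPull := mustParse("2025-11-28 17:09:44.309980519 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-11-28 17:09:54.172453418 +0000 UTC")  // lastFinishedPulling
	observed := mustParse("2025-11-28 17:09:55.006730092 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // 12.006730092s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 2.144257193s, as logged
	fmt.Println(e2e, slo)
}
```

The loki-operator entry that follows obeys the same relationship: 21.078926517s end-to-end minus an 18.650101684s pull window gives the logged 2.428824843s SLO duration.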
podCreationTimestamp="2025-11-28 17:09:41 +0000 UTC" firstStartedPulling="2025-11-28 17:09:42.673137671 +0000 UTC m=+671.931437716" lastFinishedPulling="2025-11-28 17:10:01.323239355 +0000 UTC m=+690.581539390" observedRunningTime="2025-11-28 17:10:02.071955684 +0000 UTC m=+691.330255749" watchObservedRunningTime="2025-11-28 17:10:02.078926517 +0000 UTC m=+691.337226562" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.497667 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.499390 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.501333 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.501712 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.503240 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.696641 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pcdp\" (UniqueName: \"kubernetes.io/projected/95b96405-f834-4a3b-b38c-127970e195fd-kube-api-access-5pcdp\") pod \"minio\" (UID: \"95b96405-f834-4a3b-b38c-127970e195fd\") " pod="minio-dev/minio" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.696764 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1dce8f15-a4d0-4092-829d-5e201ee1b0c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1dce8f15-a4d0-4092-829d-5e201ee1b0c7\") pod \"minio\" (UID: \"95b96405-f834-4a3b-b38c-127970e195fd\") " pod="minio-dev/minio" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.798377 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pcdp\" (UniqueName: \"kubernetes.io/projected/95b96405-f834-4a3b-b38c-127970e195fd-kube-api-access-5pcdp\") pod \"minio\" (UID: \"95b96405-f834-4a3b-b38c-127970e195fd\") " pod="minio-dev/minio" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.798540 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-1dce8f15-a4d0-4092-829d-5e201ee1b0c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1dce8f15-a4d0-4092-829d-5e201ee1b0c7\") pod \"minio\" (UID: \"95b96405-f834-4a3b-b38c-127970e195fd\") " pod="minio-dev/minio" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.803246 4710 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.803293 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-1dce8f15-a4d0-4092-829d-5e201ee1b0c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1dce8f15-a4d0-4092-829d-5e201ee1b0c7\") pod \"minio\" (UID: \"95b96405-f834-4a3b-b38c-127970e195fd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/5ff12e87cecdc522788c9976649dc19be2f6f3009abf4616e61d36957e8bef22/globalmount\"" pod="minio-dev/minio" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.823057 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pcdp\" (UniqueName: \"kubernetes.io/projected/95b96405-f834-4a3b-b38c-127970e195fd-kube-api-access-5pcdp\") pod \"minio\" (UID: \"95b96405-f834-4a3b-b38c-127970e195fd\") " pod="minio-dev/minio" Nov 28 17:10:07 crc kubenswrapper[4710]: I1128 17:10:07.835783 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-1dce8f15-a4d0-4092-829d-5e201ee1b0c7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1dce8f15-a4d0-4092-829d-5e201ee1b0c7\") pod \"minio\" (UID: \"95b96405-f834-4a3b-b38c-127970e195fd\") " pod="minio-dev/minio" Nov 28 17:10:08 crc kubenswrapper[4710]: I1128 17:10:08.136132 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Nov 28 17:10:08 crc kubenswrapper[4710]: I1128 17:10:08.362315 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Nov 28 17:10:09 crc kubenswrapper[4710]: I1128 17:10:09.086955 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"95b96405-f834-4a3b-b38c-127970e195fd","Type":"ContainerStarted","Data":"3e25078db76b49b509f84358383c76c3cde85c5dfba8eebe821f5862325bd738"} Nov 28 17:10:13 crc kubenswrapper[4710]: I1128 17:10:13.125320 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"95b96405-f834-4a3b-b38c-127970e195fd","Type":"ContainerStarted","Data":"87149a8a9deea5abd2881032168874c728d8fcfd600317f108ca3a8bedafd7f6"} Nov 28 17:10:13 crc kubenswrapper[4710]: I1128 17:10:13.146032 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.099988733 podStartE2EDuration="9.146013903s" podCreationTimestamp="2025-11-28 17:10:04 +0000 UTC" firstStartedPulling="2025-11-28 17:10:08.37411071 +0000 UTC m=+697.632410755" lastFinishedPulling="2025-11-28 17:10:12.42013588 +0000 UTC m=+701.678435925" observedRunningTime="2025-11-28 17:10:13.140432075 +0000 UTC m=+702.398732130" watchObservedRunningTime="2025-11-28 17:10:13.146013903 +0000 UTC m=+702.404313968" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.318069 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"] Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.319285 4710 util.go:30] "No sandbox for pod can be found. 
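[Editor's note] For the minio PV, the CSI attacher logs that the kubevirt.io.hostpath-provisioner plugin does not advertise STAGE_UNSTAGE_VOLUME, so the device-staging step (NodeStageVolume) is skipped and MountVolume.MountDevice is recorded as succeeded immediately, with only the global mount path noted. A sketch of that capability gate, with the check reduced to a boolean (the real code derives it from the driver's NodeGetCapabilities response):

```go
package main

import "fmt"

// mountDevice mimics the gate behind the logged csi_attacher.go message.
// nodeStageSupported would come from NodeGetCapabilities; for
// kubevirt.io.hostpath-provisioner it is false, hence the skip.
func mountDevice(volume string, nodeStageSupported bool) {
	if !nodeStageSupported {
		fmt.Printf("attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice for %s\n", volume)
		return
	}
	fmt.Printf("NodeStageVolume(%s) would run here\n", volume)
}

func main() {
	mountDevice("pvc-1dce8f15-a4d0-4092-829d-5e201ee1b0c7", false)
}
```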
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.321660 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.321753 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.323544 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-jh4xj"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.323630 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.324509 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.337079 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"]
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.477689 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"]
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.478620 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.480234 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.480920 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.481200 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.487806 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"]
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.502538 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/68c1e53e-646a-4985-b4a8-d61a238cbad2-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.502645 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/68c1e53e-646a-4985-b4a8-d61a238cbad2-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.502708 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c1e53e-646a-4985-b4a8-d61a238cbad2-config\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.502735 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68c1e53e-646a-4985-b4a8-d61a238cbad2-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.502838 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6n79\" (UniqueName: \"kubernetes.io/projected/68c1e53e-646a-4985-b4a8-d61a238cbad2-kube-api-access-t6n79\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.556158 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr"]
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.557123 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.558702 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.560024 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.569644 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr"]
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604513 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wfh9\" (UniqueName: \"kubernetes.io/projected/57ccef3e-3095-486c-a76f-733a130bf17d-kube-api-access-4wfh9\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604583 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/68c1e53e-646a-4985-b4a8-d61a238cbad2-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604603 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/68c1e53e-646a-4985-b4a8-d61a238cbad2-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604626 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57ccef3e-3095-486c-a76f-733a130bf17d-config\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604644 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604666 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c1e53e-646a-4985-b4a8-d61a238cbad2-config\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604687 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68c1e53e-646a-4985-b4a8-d61a238cbad2-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604730 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604750 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604783 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6n79\" (UniqueName: \"kubernetes.io/projected/68c1e53e-646a-4985-b4a8-d61a238cbad2-kube-api-access-t6n79\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.604799 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.605936 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68c1e53e-646a-4985-b4a8-d61a238cbad2-config\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.605971 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/68c1e53e-646a-4985-b4a8-d61a238cbad2-logging-loki-ca-bundle\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.610949 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/68c1e53e-646a-4985-b4a8-d61a238cbad2-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.611741 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/68c1e53e-646a-4985-b4a8-d61a238cbad2-logging-loki-distributor-http\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.638895 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6n79\" (UniqueName: \"kubernetes.io/projected/68c1e53e-646a-4985-b4a8-d61a238cbad2-kube-api-access-t6n79\") pod \"logging-loki-distributor-76cc67bf56-2nm9w\" (UID: \"68c1e53e-646a-4985-b4a8-d61a238cbad2\") " pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.669246 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-bb554467b-j7bcn"]
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.670850 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.672627 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.676305 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.676564 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-stssr"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.676977 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.677299 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.677369 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.697030 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-bb554467b-6hp6p"]
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.697976 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707100 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-tls-secret\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707152 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707180 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56b6c331-58e9-4845-ba94-c16852ca78aa-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707208 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-rbac\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707229 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-tenants\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707260 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56b6c331-58e9-4845-ba94-c16852ca78aa-config\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707295 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707322 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707351 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707371 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-tls-secret\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707395 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707423 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"
Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707450 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqv28\" (UniqueName:
\"kubernetes.io/projected/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-kube-api-access-jqv28\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707480 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707506 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wfh9\" (UniqueName: \"kubernetes.io/projected/57ccef3e-3095-486c-a76f-733a130bf17d-kube-api-access-4wfh9\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707539 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707562 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-lokistack-gateway\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707583 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-lokistack-gateway\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707605 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-tenants\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707630 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnl6j\" (UniqueName: \"kubernetes.io/projected/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-kube-api-access-mnl6j\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707651 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-rbac\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707679 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/56b6c331-58e9-4845-ba94-c16852ca78aa-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707706 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvf7z\" (UniqueName: \"kubernetes.io/projected/56b6c331-58e9-4845-ba94-c16852ca78aa-kube-api-access-bvf7z\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707726 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707762 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707807 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57ccef3e-3095-486c-a76f-733a130bf17d-config\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.707838 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/56b6c331-58e9-4845-ba94-c16852ca78aa-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.709177 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-ca-bundle\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.713470 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/57ccef3e-3095-486c-a76f-733a130bf17d-config\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.718272 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-bb554467b-6hp6p"] Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.719417 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-s3\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.719483 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-querier-grpc\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.734593 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/57ccef3e-3095-486c-a76f-733a130bf17d-logging-loki-querier-http\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.734924 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wfh9\" (UniqueName: \"kubernetes.io/projected/57ccef3e-3095-486c-a76f-733a130bf17d-kube-api-access-4wfh9\") pod \"logging-loki-querier-5895d59bb8-h8dlt\" (UID: \"57ccef3e-3095-486c-a76f-733a130bf17d\") " pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.755227 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-bb554467b-j7bcn"] Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.792973 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.808949 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqv28\" (UniqueName: \"kubernetes.io/projected/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-kube-api-access-jqv28\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809284 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809318 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809340 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-lokistack-gateway\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809366 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-lokistack-gateway\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809385 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-tenants\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809408 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnl6j\" (UniqueName: \"kubernetes.io/projected/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-kube-api-access-mnl6j\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809431 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-rbac\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809461 4710 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/56b6c331-58e9-4845-ba94-c16852ca78aa-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809481 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvf7z\" (UniqueName: \"kubernetes.io/projected/56b6c331-58e9-4845-ba94-c16852ca78aa-kube-api-access-bvf7z\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809499 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809525 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/56b6c331-58e9-4845-ba94-c16852ca78aa-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809542 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-tls-secret\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809562 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809585 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56b6c331-58e9-4845-ba94-c16852ca78aa-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809611 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-rbac\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809630 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" 
(UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-tenants\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809646 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56b6c331-58e9-4845-ba94-c16852ca78aa-config\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809674 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809693 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.809711 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-tls-secret\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: E1128 17:10:17.809849 4710 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 28 17:10:17 crc kubenswrapper[4710]: E1128 17:10:17.809898 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-tls-secret podName:6cab4590-1fa6-4fe0-ae00-2c70b93830bd nodeName:}" failed. No retries permitted until 2025-11-28 17:10:18.309880914 +0000 UTC m=+707.568180959 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-tls-secret") pod "logging-loki-gateway-bb554467b-6hp6p" (UID: "6cab4590-1fa6-4fe0-ae00-2c70b93830bd") : secret "logging-loki-gateway-http" not found Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.810133 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-logging-loki-ca-bundle\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.810810 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: E1128 17:10:17.810880 4710 secret.go:188] Couldn't get secret openshift-logging/logging-loki-gateway-http: secret "logging-loki-gateway-http" not found Nov 28 17:10:17 crc kubenswrapper[4710]: E1128 17:10:17.810911 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-tls-secret podName:f5157b75-08ae-416f-a4d7-1e1f7cb085c4 nodeName:}" failed. No retries permitted until 2025-11-28 17:10:18.310903276 +0000 UTC m=+707.569203321 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-secret" (UniqueName: "kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-tls-secret") pod "logging-loki-gateway-bb554467b-j7bcn" (UID: "f5157b75-08ae-416f-a4d7-1e1f7cb085c4") : secret "logging-loki-gateway-http" not found Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.812127 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56b6c331-58e9-4845-ba94-c16852ca78aa-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.812863 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.812885 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-rbac\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.813470 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-lokistack-gateway\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: 
\"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.813529 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-rbac\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.814257 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-lokistack-gateway\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.814504 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56b6c331-58e9-4845-ba94-c16852ca78aa-config\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.814813 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-logging-loki-ca-bundle\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.815942 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-tenants\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.816296 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/56b6c331-58e9-4845-ba94-c16852ca78aa-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.816494 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.819145 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.822250 4710 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-tenants\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.824031 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqv28\" (UniqueName: \"kubernetes.io/projected/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-kube-api-access-jqv28\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.824784 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/56b6c331-58e9-4845-ba94-c16852ca78aa-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.827465 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnl6j\" (UniqueName: \"kubernetes.io/projected/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-kube-api-access-mnl6j\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.829836 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvf7z\" (UniqueName: \"kubernetes.io/projected/56b6c331-58e9-4845-ba94-c16852ca78aa-kube-api-access-bvf7z\") pod \"logging-loki-query-frontend-84558f7c9f-vrpfr\" (UID: \"56b6c331-58e9-4845-ba94-c16852ca78aa\") " pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.870872 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:17 crc kubenswrapper[4710]: I1128 17:10:17.936105 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.301496 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w"] Nov 28 17:10:18 crc kubenswrapper[4710]: W1128 17:10:18.314426 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68c1e53e_646a_4985_b4a8_d61a238cbad2.slice/crio-61b75d385aa2866bcfda979729d56e107328899b2e947cc90277fbe8909edc8e WatchSource:0}: Error finding container 61b75d385aa2866bcfda979729d56e107328899b2e947cc90277fbe8909edc8e: Status 404 returned error can't find the container with id 61b75d385aa2866bcfda979729d56e107328899b2e947cc90277fbe8909edc8e Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.318956 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-tls-secret\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.319071 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-tls-secret\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.323155 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/f5157b75-08ae-416f-a4d7-1e1f7cb085c4-tls-secret\") pod \"logging-loki-gateway-bb554467b-j7bcn\" (UID: \"f5157b75-08ae-416f-a4d7-1e1f7cb085c4\") " pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.323738 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/6cab4590-1fa6-4fe0-ae00-2c70b93830bd-tls-secret\") pod \"logging-loki-gateway-bb554467b-6hp6p\" (UID: \"6cab4590-1fa6-4fe0-ae00-2c70b93830bd\") " pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.358706 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-5895d59bb8-h8dlt"] Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.362073 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:18 crc kubenswrapper[4710]: W1128 17:10:18.362791 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57ccef3e_3095_486c_a76f_733a130bf17d.slice/crio-60c879f12b0aaab8d611014b9ab62b3265321237f21f72eceaab45045d346923 WatchSource:0}: Error finding container 60c879f12b0aaab8d611014b9ab62b3265321237f21f72eceaab45045d346923: Status 404 returned error can't find the container with id 60c879f12b0aaab8d611014b9ab62b3265321237f21f72eceaab45045d346923 Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.445842 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr"] Nov 28 17:10:18 crc kubenswrapper[4710]: W1128 17:10:18.454284 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56b6c331_58e9_4845_ba94_c16852ca78aa.slice/crio-eea43e973275204c641c5d5f72e24be76bb0505422b67abea57e2ccb618250fb WatchSource:0}: Error finding container eea43e973275204c641c5d5f72e24be76bb0505422b67abea57e2ccb618250fb: Status 404 returned error can't find the container with id eea43e973275204c641c5d5f72e24be76bb0505422b67abea57e2ccb618250fb Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.486023 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.486949 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.491508 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.492137 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.500234 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.538353 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.539648 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.541779 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.542475 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.548200 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.595669 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.620554 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.621884 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.623891 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8493d77b-3417-447b-88a2-f318cfd8de23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8493d77b-3417-447b-88a2-f318cfd8de23\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.623959 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.623984 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.624040 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a11c8ef6-d35a-44a9-814e-b4707e125f10\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a11c8ef6-d35a-44a9-814e-b4707e125f10\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.625854 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.627357 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.627429 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.627494 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b308845-0e6e-41e0-9ca9-f04b09a31211-config\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.627601 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" 
(UniqueName: \"kubernetes.io/secret/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.627630 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rgn2\" (UniqueName: \"kubernetes.io/projected/0b308845-0e6e-41e0-9ca9-f04b09a31211-kube-api-access-9rgn2\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.643415 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.729604 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.729987 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rgn2\" (UniqueName: \"kubernetes.io/projected/0b308845-0e6e-41e0-9ca9-f04b09a31211-kube-api-access-9rgn2\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730057 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-798fc\" (UniqueName: \"kubernetes.io/projected/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-kube-api-access-798fc\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730117 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730149 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730193 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730211 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-202b2b3c-4d24-4158-912e-bac6afd2e739\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-202b2b3c-4d24-4158-912e-bac6afd2e739\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730229 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730275 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8493d77b-3417-447b-88a2-f318cfd8de23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8493d77b-3417-447b-88a2-f318cfd8de23\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730298 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cf17bda2-07a4-4757-b284-723da24b8048\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf17bda2-07a4-4757-b284-723da24b8048\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730318 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730356 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zqzv\" (UniqueName: \"kubernetes.io/projected/1c46c2c6-fb09-4289-a38d-ce46f239b830-kube-api-access-2zqzv\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730379 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730427 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c46c2c6-fb09-4289-a38d-ce46f239b830-config\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730464 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: 
\"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730492 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a11c8ef6-d35a-44a9-814e-b4707e125f10\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a11c8ef6-d35a-44a9-814e-b4707e125f10\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730518 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730556 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730578 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b308845-0e6e-41e0-9ca9-f04b09a31211-config\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730651 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730677 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.730718 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.732567 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.732622 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0b308845-0e6e-41e0-9ca9-f04b09a31211-config\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.735609 4710 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.735660 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8493d77b-3417-447b-88a2-f318cfd8de23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8493d77b-3417-447b-88a2-f318cfd8de23\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/fcc63e33a234cdfe3808f5a3b85abeca8c646188686b5e3ec096438a0174ecc2/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.736453 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.736489 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.737326 4710 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.737344 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a11c8ef6-d35a-44a9-814e-b4707e125f10\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a11c8ef6-d35a-44a9-814e-b4707e125f10\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/794fa5e2df41583c0cd223b13f9287f10f16aece1f8b4afef552ef1a5df6c3c6/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.745671 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0b308845-0e6e-41e0-9ca9-f04b09a31211-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.749523 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rgn2\" (UniqueName: \"kubernetes.io/projected/0b308845-0e6e-41e0-9ca9-f04b09a31211-kube-api-access-9rgn2\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.767368 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a11c8ef6-d35a-44a9-814e-b4707e125f10\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a11c8ef6-d35a-44a9-814e-b4707e125f10\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.772381 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8493d77b-3417-447b-88a2-f318cfd8de23\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8493d77b-3417-447b-88a2-f318cfd8de23\") pod \"logging-loki-ingester-0\" (UID: \"0b308845-0e6e-41e0-9ca9-f04b09a31211\") " pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.795322 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-bb554467b-6hp6p"] Nov 28 17:10:18 crc kubenswrapper[4710]: W1128 17:10:18.801915 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cab4590_1fa6_4fe0_ae00_2c70b93830bd.slice/crio-46a8069886d4b17ad78f323f89b9bdaa8c06c19141e1adcf3bf44b39b1ff0d89 WatchSource:0}: Error finding container 46a8069886d4b17ad78f323f89b9bdaa8c06c19141e1adcf3bf44b39b1ff0d89: Status 404 returned error can't find the container with id 46a8069886d4b17ad78f323f89b9bdaa8c06c19141e1adcf3bf44b39b1ff0d89 Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.808106 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.831866 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.831925 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.831980 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-798fc\" (UniqueName: \"kubernetes.io/projected/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-kube-api-access-798fc\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832034 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832084 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832115 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832139 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-202b2b3c-4d24-4158-912e-bac6afd2e739\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-202b2b3c-4d24-4158-912e-bac6afd2e739\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832180 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832242 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-cf17bda2-07a4-4757-b284-723da24b8048\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf17bda2-07a4-4757-b284-723da24b8048\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832278 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zqzv\" (UniqueName: \"kubernetes.io/projected/1c46c2c6-fb09-4289-a38d-ce46f239b830-kube-api-access-2zqzv\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832323 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c46c2c6-fb09-4289-a38d-ce46f239b830-config\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832380 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832422 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.832495 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.834542 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.835041 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-config\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.835277 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1c46c2c6-fb09-4289-a38d-ce46f239b830-config\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.835641 4710 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.835674 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.835693 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-202b2b3c-4d24-4158-912e-bac6afd2e739\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-202b2b3c-4d24-4158-912e-bac6afd2e739\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4fd0ccefb6bd37a5ca451c7a1dc7a39e2b641c2db32e9f94335660a18ade972b/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.836010 4710 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.836094 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cf17bda2-07a4-4757-b284-723da24b8048\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf17bda2-07a4-4757-b284-723da24b8048\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0ed0ae22486d08f66581aeb72521755e439d122622a9b18026346876fd45bac7/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.836681 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.836694 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.838877 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.840216 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/1c46c2c6-fb09-4289-a38d-ce46f239b830-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.840800 4710 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.845145 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.850159 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-798fc\" (UniqueName: \"kubernetes.io/projected/0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d-kube-api-access-798fc\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.853342 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zqzv\" (UniqueName: \"kubernetes.io/projected/1c46c2c6-fb09-4289-a38d-ce46f239b830-kube-api-access-2zqzv\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.861637 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-202b2b3c-4d24-4158-912e-bac6afd2e739\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-202b2b3c-4d24-4158-912e-bac6afd2e739\") pod \"logging-loki-index-gateway-0\" (UID: \"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d\") " pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.864188 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cf17bda2-07a4-4757-b284-723da24b8048\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cf17bda2-07a4-4757-b284-723da24b8048\") pod \"logging-loki-compactor-0\" (UID: \"1c46c2c6-fb09-4289-a38d-ce46f239b830\") " pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:18 crc kubenswrapper[4710]: I1128 17:10:18.969191 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.004502 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.023066 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-bb554467b-j7bcn"] Nov 28 17:10:19 crc kubenswrapper[4710]: W1128 17:10:19.027559 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5157b75_08ae_416f_a4d7_1e1f7cb085c4.slice/crio-9b23e7c87f50634bb3960d89e611cbf904877689a09dc5fc1621cce16b35bfd6 WatchSource:0}: Error finding container 9b23e7c87f50634bb3960d89e611cbf904877689a09dc5fc1621cce16b35bfd6: Status 404 returned error can't find the container with id 9b23e7c87f50634bb3960d89e611cbf904877689a09dc5fc1621cce16b35bfd6 Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.153214 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.169920 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" event={"ID":"6cab4590-1fa6-4fe0-ae00-2c70b93830bd","Type":"ContainerStarted","Data":"46a8069886d4b17ad78f323f89b9bdaa8c06c19141e1adcf3bf44b39b1ff0d89"} Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.172855 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" event={"ID":"57ccef3e-3095-486c-a76f-733a130bf17d","Type":"ContainerStarted","Data":"60c879f12b0aaab8d611014b9ab62b3265321237f21f72eceaab45045d346923"} Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.174359 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" event={"ID":"56b6c331-58e9-4845-ba94-c16852ca78aa","Type":"ContainerStarted","Data":"eea43e973275204c641c5d5f72e24be76bb0505422b67abea57e2ccb618250fb"} Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.175325 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w" event={"ID":"68c1e53e-646a-4985-b4a8-d61a238cbad2","Type":"ContainerStarted","Data":"61b75d385aa2866bcfda979729d56e107328899b2e947cc90277fbe8909edc8e"} Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.176156 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"0b308845-0e6e-41e0-9ca9-f04b09a31211","Type":"ContainerStarted","Data":"a2ebc98d38183d1551abd067ecf97865802ff73459d97523d49cd9c18ff1208a"} Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.177135 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" event={"ID":"f5157b75-08ae-416f-a4d7-1e1f7cb085c4","Type":"ContainerStarted","Data":"9b23e7c87f50634bb3960d89e611cbf904877689a09dc5fc1621cce16b35bfd6"} Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.405804 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Nov 28 17:10:19 crc kubenswrapper[4710]: W1128 17:10:19.410804 4710 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d4ff66e_4d49_4dc9_9ef9_ae4701c5ff2d.slice/crio-dad82c030a18d294c6a6111c481b2dbe8b55320813ea69862fbfaf944931d520 WatchSource:0}: Error finding container dad82c030a18d294c6a6111c481b2dbe8b55320813ea69862fbfaf944931d520: Status 404 returned error can't find the container with id dad82c030a18d294c6a6111c481b2dbe8b55320813ea69862fbfaf944931d520 Nov 28 17:10:19 crc kubenswrapper[4710]: I1128 17:10:19.527545 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Nov 28 17:10:19 crc kubenswrapper[4710]: W1128 17:10:19.531403 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c46c2c6_fb09_4289_a38d_ce46f239b830.slice/crio-f6bc5a1fbbc6c422b59e62895ac444991e3a3ecac60535b6004ba341b5276fd8 WatchSource:0}: Error finding container f6bc5a1fbbc6c422b59e62895ac444991e3a3ecac60535b6004ba341b5276fd8: Status 404 returned error can't find the container with id f6bc5a1fbbc6c422b59e62895ac444991e3a3ecac60535b6004ba341b5276fd8 Nov 28 17:10:20 crc kubenswrapper[4710]: I1128 17:10:20.188341 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"1c46c2c6-fb09-4289-a38d-ce46f239b830","Type":"ContainerStarted","Data":"f6bc5a1fbbc6c422b59e62895ac444991e3a3ecac60535b6004ba341b5276fd8"} Nov 28 17:10:20 crc kubenswrapper[4710]: I1128 17:10:20.189255 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d","Type":"ContainerStarted","Data":"dad82c030a18d294c6a6111c481b2dbe8b55320813ea69862fbfaf944931d520"} Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.243867 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"0b308845-0e6e-41e0-9ca9-f04b09a31211","Type":"ContainerStarted","Data":"8b517aa63c3db26b569dedf3d7ffd451f6744ef149d233ec4d0a53a02b498bf4"} Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.244367 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.246717 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"1c46c2c6-fb09-4289-a38d-ce46f239b830","Type":"ContainerStarted","Data":"1a78376c28cc15fe2e3d580e01f4749f945bfea3afd975a4093842ecae3bdb38"} Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.246943 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.249056 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" event={"ID":"f5157b75-08ae-416f-a4d7-1e1f7cb085c4","Type":"ContainerStarted","Data":"378a8102f18a8c1685e3629ca0aba64a79abde966ca1f6e60cdde4daedc7fe7a"} Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.251370 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" event={"ID":"6cab4590-1fa6-4fe0-ae00-2c70b93830bd","Type":"ContainerStarted","Data":"404cc1c5132087040af593b12ed5cd8b991d4113c6442a80f9e3253e6e54302f"} Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.252993 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" event={"ID":"57ccef3e-3095-486c-a76f-733a130bf17d","Type":"ContainerStarted","Data":"002f7eb2b0075d88d8a15d3a6069e04455e30199f4156fca086217964d9af523"} Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.253134 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.254578 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d","Type":"ContainerStarted","Data":"c3eabaa407d8fc28f64f65f775ab5bd574283203affa25fd17e8f08d4025756f"} Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.254681 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.257694 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" event={"ID":"56b6c331-58e9-4845-ba94-c16852ca78aa","Type":"ContainerStarted","Data":"66d5a78c494c4721415e49dbcecc41ccfa5f2c77eca7b50398657d517ba1369c"} Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.258484 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.262414 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=2.872244371 podStartE2EDuration="10.262401321s" podCreationTimestamp="2025-11-28 17:10:17 +0000 UTC" firstStartedPulling="2025-11-28 17:10:19.008009233 +0000 UTC m=+708.266309278" lastFinishedPulling="2025-11-28 17:10:26.398166163 +0000 UTC m=+715.656466228" observedRunningTime="2025-11-28 17:10:27.25888923 +0000 UTC m=+716.517189285" watchObservedRunningTime="2025-11-28 17:10:27.262401321 +0000 UTC m=+716.520701366" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.263220 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w" event={"ID":"68c1e53e-646a-4985-b4a8-d61a238cbad2","Type":"ContainerStarted","Data":"579eba5d4a7c4c36b2edbb637b595999a04a1e483e8a29234b1ad72ba0f1c72c"} Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.263941 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.285508 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" podStartSLOduration=2.3337952570000002 podStartE2EDuration="10.285490216s" podCreationTimestamp="2025-11-28 17:10:17 +0000 UTC" firstStartedPulling="2025-11-28 17:10:18.37035279 +0000 UTC m=+707.628652855" lastFinishedPulling="2025-11-28 17:10:26.322047769 +0000 UTC m=+715.580347814" observedRunningTime="2025-11-28 17:10:27.278237336 +0000 UTC m=+716.536537401" watchObservedRunningTime="2025-11-28 17:10:27.285490216 +0000 UTC m=+716.543790261" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.298157 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.437023465 podStartE2EDuration="10.29814342s" podCreationTimestamp="2025-11-28 17:10:17 
+0000 UTC" firstStartedPulling="2025-11-28 17:10:19.53709653 +0000 UTC m=+708.795396565" lastFinishedPulling="2025-11-28 17:10:26.398216475 +0000 UTC m=+715.656516520" observedRunningTime="2025-11-28 17:10:27.296246429 +0000 UTC m=+716.554546474" watchObservedRunningTime="2025-11-28 17:10:27.29814342 +0000 UTC m=+716.556443465" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.317857 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" podStartSLOduration=2.37877072 podStartE2EDuration="10.317839647s" podCreationTimestamp="2025-11-28 17:10:17 +0000 UTC" firstStartedPulling="2025-11-28 17:10:18.456522284 +0000 UTC m=+707.714822329" lastFinishedPulling="2025-11-28 17:10:26.395591201 +0000 UTC m=+715.653891256" observedRunningTime="2025-11-28 17:10:27.312907469 +0000 UTC m=+716.571207514" watchObservedRunningTime="2025-11-28 17:10:27.317839647 +0000 UTC m=+716.576139692" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.335368 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.350641964 podStartE2EDuration="10.335353084s" podCreationTimestamp="2025-11-28 17:10:17 +0000 UTC" firstStartedPulling="2025-11-28 17:10:19.413000598 +0000 UTC m=+708.671300643" lastFinishedPulling="2025-11-28 17:10:26.397711718 +0000 UTC m=+715.656011763" observedRunningTime="2025-11-28 17:10:27.331097839 +0000 UTC m=+716.589397884" watchObservedRunningTime="2025-11-28 17:10:27.335353084 +0000 UTC m=+716.593653129" Nov 28 17:10:27 crc kubenswrapper[4710]: I1128 17:10:27.348055 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w" podStartSLOduration=2.268608901 podStartE2EDuration="10.348035908s" podCreationTimestamp="2025-11-28 17:10:17 +0000 UTC" firstStartedPulling="2025-11-28 17:10:18.318148077 +0000 UTC m=+707.576448142" lastFinishedPulling="2025-11-28 17:10:26.397575094 +0000 UTC m=+715.655875149" observedRunningTime="2025-11-28 17:10:27.346248211 +0000 UTC m=+716.604548266" watchObservedRunningTime="2025-11-28 17:10:27.348035908 +0000 UTC m=+716.606335953" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.285136 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" event={"ID":"f5157b75-08ae-416f-a4d7-1e1f7cb085c4","Type":"ContainerStarted","Data":"dbe7af07ed4ea98d72a254d026048bff675c0ab3abdf87f0b21a1d5eca44a20c"} Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.285898 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.285917 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.287061 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" event={"ID":"6cab4590-1fa6-4fe0-ae00-2c70b93830bd","Type":"ContainerStarted","Data":"eb7e25e9e3260d982d4ec134268e1cc19f83b2ed3faa779c615b0899d615943a"} Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.287219 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.287279 
4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.294654 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.295145 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.297936 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.312365 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.317494 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-bb554467b-j7bcn" podStartSLOduration=3.006086867 podStartE2EDuration="13.317476829s" podCreationTimestamp="2025-11-28 17:10:17 +0000 UTC" firstStartedPulling="2025-11-28 17:10:19.030694486 +0000 UTC m=+708.288994531" lastFinishedPulling="2025-11-28 17:10:29.342084448 +0000 UTC m=+718.600384493" observedRunningTime="2025-11-28 17:10:30.314824335 +0000 UTC m=+719.573124390" watchObservedRunningTime="2025-11-28 17:10:30.317476829 +0000 UTC m=+719.575776874" Nov 28 17:10:30 crc kubenswrapper[4710]: I1128 17:10:30.334355 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-bb554467b-6hp6p" podStartSLOduration=2.7951312489999998 podStartE2EDuration="13.334336894s" podCreationTimestamp="2025-11-28 17:10:17 +0000 UTC" firstStartedPulling="2025-11-28 17:10:18.803845273 +0000 UTC m=+708.062145318" lastFinishedPulling="2025-11-28 17:10:29.343050928 +0000 UTC m=+718.601350963" observedRunningTime="2025-11-28 17:10:30.333456576 +0000 UTC m=+719.591756621" watchObservedRunningTime="2025-11-28 17:10:30.334336894 +0000 UTC m=+719.592636939" Nov 28 17:10:43 crc kubenswrapper[4710]: I1128 17:10:43.343714 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:10:43 crc kubenswrapper[4710]: I1128 17:10:43.345888 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:10:47 crc kubenswrapper[4710]: I1128 17:10:47.800257 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-5895d59bb8-h8dlt" Nov 28 17:10:47 crc kubenswrapper[4710]: I1128 17:10:47.897927 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-84558f7c9f-vrpfr" Nov 28 17:10:47 crc kubenswrapper[4710]: I1128 17:10:47.942711 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-logging/logging-loki-distributor-76cc67bf56-2nm9w" Nov 28 17:10:48 crc kubenswrapper[4710]: I1128 17:10:48.817302 4710 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Nov 28 17:10:48 crc kubenswrapper[4710]: I1128 17:10:48.817672 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0b308845-0e6e-41e0-9ca9-f04b09a31211" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 17:10:48 crc kubenswrapper[4710]: I1128 17:10:48.983075 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Nov 28 17:10:49 crc kubenswrapper[4710]: I1128 17:10:49.161753 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Nov 28 17:10:58 crc kubenswrapper[4710]: I1128 17:10:58.814963 4710 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 28 17:10:58 crc kubenswrapper[4710]: I1128 17:10:58.815373 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0b308845-0e6e-41e0-9ca9-f04b09a31211" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 17:11:08 crc kubenswrapper[4710]: I1128 17:11:08.812401 4710 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Nov 28 17:11:08 crc kubenswrapper[4710]: I1128 17:11:08.812933 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="0b308845-0e6e-41e0-9ca9-f04b09a31211" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Nov 28 17:11:13 crc kubenswrapper[4710]: I1128 17:11:13.343573 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:11:13 crc kubenswrapper[4710]: I1128 17:11:13.344145 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:11:18 crc kubenswrapper[4710]: I1128 17:11:18.818847 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Nov 28 17:11:22 crc kubenswrapper[4710]: I1128 17:11:22.026233 4710 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.675242 
4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-76nnx"] Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.677372 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.680665 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.681468 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.682621 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.683176 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-j7n8m" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.684635 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.685875 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-76nnx"] Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.695616 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.782716 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-metrics\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.782784 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-trusted-ca\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.782822 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3dc7511-580f-44e2-bf54-831025400013-datadir\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.782991 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3dc7511-580f-44e2-bf54-831025400013-tmp\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.783070 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpkrw\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-kube-api-access-qpkrw\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.783138 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config-openshift-service-cacrt\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.783167 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-entrypoint\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.783296 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-syslog-receiver\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.783330 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-sa-token\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.783368 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-token\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.783452 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.811490 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-76nnx"] Nov 28 17:11:38 crc kubenswrapper[4710]: E1128 17:11:38.812045 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-qpkrw metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-76nnx" podUID="d3dc7511-580f-44e2-bf54-831025400013" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.885058 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-token\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.885352 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc 
kubenswrapper[4710]: I1128 17:11:38.885443 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-metrics\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.885520 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-trusted-ca\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.885593 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3dc7511-580f-44e2-bf54-831025400013-datadir\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.885675 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3dc7511-580f-44e2-bf54-831025400013-tmp\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.885746 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpkrw\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-kube-api-access-qpkrw\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.885885 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config-openshift-service-cacrt\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.885956 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-entrypoint\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.885988 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3dc7511-580f-44e2-bf54-831025400013-datadir\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: E1128 17:11:38.885552 4710 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.886271 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.886138 4710 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-sa-token\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: E1128 17:11:38.886333 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-metrics podName:d3dc7511-580f-44e2-bf54-831025400013 nodeName:}" failed. No retries permitted until 2025-11-28 17:11:39.386308124 +0000 UTC m=+788.644608349 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-metrics") pod "collector-76nnx" (UID: "d3dc7511-580f-44e2-bf54-831025400013") : secret "collector-metrics" not found Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.886361 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-syslog-receiver\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: E1128 17:11:38.886467 4710 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found Nov 28 17:11:38 crc kubenswrapper[4710]: E1128 17:11:38.886524 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-syslog-receiver podName:d3dc7511-580f-44e2-bf54-831025400013 nodeName:}" failed. No retries permitted until 2025-11-28 17:11:39.38650872 +0000 UTC m=+788.644808785 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-syslog-receiver") pod "collector-76nnx" (UID: "d3dc7511-580f-44e2-bf54-831025400013") : secret "collector-syslog-receiver" not found Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.886551 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-trusted-ca\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.886594 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config-openshift-service-cacrt\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.887554 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-entrypoint\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.890617 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3dc7511-580f-44e2-bf54-831025400013-tmp\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.902427 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-token\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.902654 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpkrw\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-kube-api-access-qpkrw\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:38 crc kubenswrapper[4710]: I1128 17:11:38.903593 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-sa-token\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.393379 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-metrics\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx" Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.393862 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-syslog-receiver\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " 
pod="openshift-logging/collector-76nnx"
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.400499 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-syslog-receiver\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx"
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.400521 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-metrics\") pod \"collector-76nnx\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") " pod="openshift-logging/collector-76nnx"
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.797716 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-76nnx"
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.809509 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-76nnx"
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.900071 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3dc7511-580f-44e2-bf54-831025400013-tmp\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.900458 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpkrw\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-kube-api-access-qpkrw\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.900609 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-sa-token\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.900724 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-metrics\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.900889 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3dc7511-580f-44e2-bf54-831025400013-datadir\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.901042 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.901238 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-syslog-receiver\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.901386 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-trusted-ca\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.901556 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config-openshift-service-cacrt\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.901697 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-entrypoint\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.901879 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-token\") pod \"d3dc7511-580f-44e2-bf54-831025400013\" (UID: \"d3dc7511-580f-44e2-bf54-831025400013\") "
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.901072 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3dc7511-580f-44e2-bf54-831025400013-datadir" (OuterVolumeSpecName: "datadir") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.902165 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config" (OuterVolumeSpecName: "config") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.902293 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.902386 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.902444 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.903229 4710 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/d3dc7511-580f-44e2-bf54-831025400013-datadir\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.903285 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.903313 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.903346 4710 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.903375 4710 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/d3dc7511-580f-44e2-bf54-831025400013-entrypoint\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.906018 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-kube-api-access-qpkrw" (OuterVolumeSpecName: "kube-api-access-qpkrw") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "kube-api-access-qpkrw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.906401 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.907185 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-sa-token" (OuterVolumeSpecName: "sa-token") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.907633 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-metrics" (OuterVolumeSpecName: "metrics") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.908067 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3dc7511-580f-44e2-bf54-831025400013-tmp" (OuterVolumeSpecName: "tmp") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:11:39 crc kubenswrapper[4710]: I1128 17:11:39.909574 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-token" (OuterVolumeSpecName: "collector-token") pod "d3dc7511-580f-44e2-bf54-831025400013" (UID: "d3dc7511-580f-44e2-bf54-831025400013"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.004322 4710 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-token\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.004361 4710 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3dc7511-580f-44e2-bf54-831025400013-tmp\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.004402 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpkrw\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-kube-api-access-qpkrw\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.004416 4710 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/d3dc7511-580f-44e2-bf54-831025400013-sa-token\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.004427 4710 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-metrics\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.004438 4710 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/d3dc7511-580f-44e2-bf54-831025400013-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.802928 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-76nnx"
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.864147 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-76nnx"]
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.869743 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-76nnx"]
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.880128 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-rnjct"]
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.882061 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rnjct"]
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.882215 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rnjct"
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.887790 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.887928 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.888100 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-j7n8m"
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.888185 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.888306 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Nov 28 17:11:40 crc kubenswrapper[4710]: I1128 17:11:40.894637 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.018487 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/72491cd2-2224-4420-a937-a15f5f22e035-metrics\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.018548 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-entrypoint\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.018575 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/72491cd2-2224-4420-a937-a15f5f22e035-collector-token\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.018620 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-trusted-ca\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.018643 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-config\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.018693 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/72491cd2-2224-4420-a937-a15f5f22e035-collector-syslog-receiver\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.018721 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l4zb\" (UniqueName: \"kubernetes.io/projected/72491cd2-2224-4420-a937-a15f5f22e035-kube-api-access-9l4zb\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.018748 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-config-openshift-service-cacrt\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.018817 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/72491cd2-2224-4420-a937-a15f5f22e035-tmp\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.019059 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/72491cd2-2224-4420-a937-a15f5f22e035-sa-token\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.019147 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/72491cd2-2224-4420-a937-a15f5f22e035-datadir\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.120617 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-trusted-ca\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.120740 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-config\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.120947 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/72491cd2-2224-4420-a937-a15f5f22e035-collector-syslog-receiver\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121111 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l4zb\" (UniqueName: \"kubernetes.io/projected/72491cd2-2224-4420-a937-a15f5f22e035-kube-api-access-9l4zb\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121229 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-config-openshift-service-cacrt\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121449 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/72491cd2-2224-4420-a937-a15f5f22e035-tmp\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121522 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/72491cd2-2224-4420-a937-a15f5f22e035-sa-token\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121593 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/72491cd2-2224-4420-a937-a15f5f22e035-datadir\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121645 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/72491cd2-2224-4420-a937-a15f5f22e035-metrics\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121701 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-trusted-ca\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121726 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-entrypoint\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121818 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/72491cd2-2224-4420-a937-a15f5f22e035-collector-token\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.121824 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/72491cd2-2224-4420-a937-a15f5f22e035-datadir\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.124526 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/72491cd2-2224-4420-a937-a15f5f22e035-tmp\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.124619 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-entrypoint\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.124959 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-config-openshift-service-cacrt\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.125847 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/72491cd2-2224-4420-a937-a15f5f22e035-config\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.126257 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/72491cd2-2224-4420-a937-a15f5f22e035-collector-syslog-receiver\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.127689 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/72491cd2-2224-4420-a937-a15f5f22e035-collector-token\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.128789 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/72491cd2-2224-4420-a937-a15f5f22e035-metrics\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.143906 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/72491cd2-2224-4420-a937-a15f5f22e035-sa-token\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.149544 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l4zb\" (UniqueName: \"kubernetes.io/projected/72491cd2-2224-4420-a937-a15f5f22e035-kube-api-access-9l4zb\") pod \"collector-rnjct\" (UID: \"72491cd2-2224-4420-a937-a15f5f22e035\") " pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.158618 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3dc7511-580f-44e2-bf54-831025400013" path="/var/lib/kubelet/pods/d3dc7511-580f-44e2-bf54-831025400013/volumes"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.209580 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-rnjct"
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.716586 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-rnjct"]
Nov 28 17:11:41 crc kubenswrapper[4710]: I1128 17:11:41.811311 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-rnjct" event={"ID":"72491cd2-2224-4420-a937-a15f5f22e035","Type":"ContainerStarted","Data":"3c5577359d69e9173b4c2e75ddd8f9b38c181969e26d0878bdecbf652c40d1de"}
Nov 28 17:11:43 crc kubenswrapper[4710]: I1128 17:11:43.346156 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:11:43 crc kubenswrapper[4710]: I1128 17:11:43.346481 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:11:43 crc kubenswrapper[4710]: I1128 17:11:43.346525 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc"
Nov 28 17:11:43 crc kubenswrapper[4710]: I1128 17:11:43.347271 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6d85207656f6d2601d2bdd070cb40b8f4df58d52a8f16d4308eea97c4776e87"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 17:11:43 crc kubenswrapper[4710]: I1128 17:11:43.347375 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://c6d85207656f6d2601d2bdd070cb40b8f4df58d52a8f16d4308eea97c4776e87" gracePeriod=600
Nov 28 17:11:43 crc kubenswrapper[4710]: I1128 17:11:43.825715 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="c6d85207656f6d2601d2bdd070cb40b8f4df58d52a8f16d4308eea97c4776e87" exitCode=0
Nov 28 17:11:43 crc kubenswrapper[4710]: I1128 17:11:43.825813 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"c6d85207656f6d2601d2bdd070cb40b8f4df58d52a8f16d4308eea97c4776e87"}
Nov 28 17:11:43 crc kubenswrapper[4710]: I1128 17:11:43.826051 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"739dbee0820156a6554c32a8264c90cabd429c04c249177fc7347cfeddb379ed"}
Nov 28 17:11:43 crc kubenswrapper[4710]: I1128 17:11:43.826072 4710 scope.go:117] "RemoveContainer" containerID="503a90972a7301443a4a3341e128be8edb746f7d27a04b1ad0ecedf9ae666272"
Nov 28 17:11:49 crc kubenswrapper[4710]: I1128 17:11:49.882946 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-rnjct" event={"ID":"72491cd2-2224-4420-a937-a15f5f22e035","Type":"ContainerStarted","Data":"fe4faf350a765b409ae7a5307610dd6a92ef4440196c6f3e124f1fcc08452fdd"}
Nov 28 17:11:49 crc kubenswrapper[4710]: I1128 17:11:49.903536 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-rnjct" podStartSLOduration=2.21886704 podStartE2EDuration="9.903519207s" podCreationTimestamp="2025-11-28 17:11:40 +0000 UTC" firstStartedPulling="2025-11-28 17:11:41.725121055 +0000 UTC m=+790.983421120" lastFinishedPulling="2025-11-28 17:11:49.409773242 +0000 UTC m=+798.668073287" observedRunningTime="2025-11-28 17:11:49.901174662 +0000 UTC m=+799.159474707" watchObservedRunningTime="2025-11-28 17:11:49.903519207 +0000 UTC m=+799.161819252"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.766047 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"]
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.768233 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.771077 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.781741 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slpg7\" (UniqueName: \"kubernetes.io/projected/776c25fb-769e-45f1-bbdd-1ef457e29908-kube-api-access-slpg7\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.782105 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.782336 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.784638 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"]
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.883701 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.883837 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slpg7\" (UniqueName: \"kubernetes.io/projected/776c25fb-769e-45f1-bbdd-1ef457e29908-kube-api-access-slpg7\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.883869 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.884377 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.884490 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:12 crc kubenswrapper[4710]: I1128 17:12:12.908073 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slpg7\" (UniqueName: \"kubernetes.io/projected/776c25fb-769e-45f1-bbdd-1ef457e29908-kube-api-access-slpg7\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:13 crc kubenswrapper[4710]: I1128 17:12:13.088503 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:13 crc kubenswrapper[4710]: I1128 17:12:13.375624 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"]
Nov 28 17:12:13 crc kubenswrapper[4710]: W1128 17:12:13.392419 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod776c25fb_769e_45f1_bbdd_1ef457e29908.slice/crio-f8c146243df2e1f3f850f954a6ebb1c71eb2820468870e09de84a6456f19be54 WatchSource:0}: Error finding container f8c146243df2e1f3f850f954a6ebb1c71eb2820468870e09de84a6456f19be54: Status 404 returned error can't find the container with id f8c146243df2e1f3f850f954a6ebb1c71eb2820468870e09de84a6456f19be54
Nov 28 17:12:14 crc kubenswrapper[4710]: I1128 17:12:14.056788 4710 generic.go:334] "Generic (PLEG): container finished" podID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerID="f44e8a8bd3041a34cd9ae0c7592b10dff9fa8308d123cdf6c94021aba1821242" exitCode=0
Nov 28 17:12:14 crc kubenswrapper[4710]: I1128 17:12:14.056829 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2" event={"ID":"776c25fb-769e-45f1-bbdd-1ef457e29908","Type":"ContainerDied","Data":"f44e8a8bd3041a34cd9ae0c7592b10dff9fa8308d123cdf6c94021aba1821242"}
Nov 28 17:12:14 crc kubenswrapper[4710]: I1128 17:12:14.056851 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2" event={"ID":"776c25fb-769e-45f1-bbdd-1ef457e29908","Type":"ContainerStarted","Data":"f8c146243df2e1f3f850f954a6ebb1c71eb2820468870e09de84a6456f19be54"}
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.120732 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m58dr"]
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.122891 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.135723 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m58dr"]
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.320084 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-utilities\") pod \"redhat-operators-m58dr\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.320139 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6h4j\" (UniqueName: \"kubernetes.io/projected/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-kube-api-access-j6h4j\") pod \"redhat-operators-m58dr\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.320221 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-catalog-content\") pod \"redhat-operators-m58dr\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.422286 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-catalog-content\") pod \"redhat-operators-m58dr\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.422393 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-utilities\") pod \"redhat-operators-m58dr\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.422436 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6h4j\" (UniqueName: \"kubernetes.io/projected/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-kube-api-access-j6h4j\") pod \"redhat-operators-m58dr\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.422860 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-utilities\") pod \"redhat-operators-m58dr\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.422958 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-catalog-content\") pod \"redhat-operators-m58dr\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.444234 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6h4j\" (UniqueName: \"kubernetes.io/projected/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-kube-api-access-j6h4j\") pod \"redhat-operators-m58dr\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:15 crc kubenswrapper[4710]: I1128 17:12:15.739616 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:16 crc kubenswrapper[4710]: I1128 17:12:16.244944 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m58dr"]
Nov 28 17:12:16 crc kubenswrapper[4710]: W1128 17:12:16.247673 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49d56ff0_aedf_42dd_9bcd_1fba3039d5a9.slice/crio-f5dbde91f6e7181bfdb32b44b2d411157c075587f1449ff72cb038b9577a7d10 WatchSource:0}: Error finding container f5dbde91f6e7181bfdb32b44b2d411157c075587f1449ff72cb038b9577a7d10: Status 404 returned error can't find the container with id f5dbde91f6e7181bfdb32b44b2d411157c075587f1449ff72cb038b9577a7d10
Nov 28 17:12:17 crc kubenswrapper[4710]: I1128 17:12:17.084513 4710 generic.go:334] "Generic (PLEG): container finished" podID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerID="89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e" exitCode=0
Nov 28 17:12:17 crc kubenswrapper[4710]: I1128 17:12:17.084620 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58dr" event={"ID":"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9","Type":"ContainerDied","Data":"89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e"}
Nov 28 17:12:17 crc kubenswrapper[4710]: I1128 17:12:17.084675 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58dr" event={"ID":"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9","Type":"ContainerStarted","Data":"f5dbde91f6e7181bfdb32b44b2d411157c075587f1449ff72cb038b9577a7d10"}
Nov 28 17:12:17 crc kubenswrapper[4710]: I1128 17:12:17.086960 4710 generic.go:334] "Generic (PLEG): container finished" podID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerID="848f6d2a9bdb273197dbb042f1880e0b0678fd8b90feb91fd69a09bed98ea632" exitCode=0
Nov 28 17:12:17 crc kubenswrapper[4710]: I1128 17:12:17.087007 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2" event={"ID":"776c25fb-769e-45f1-bbdd-1ef457e29908","Type":"ContainerDied","Data":"848f6d2a9bdb273197dbb042f1880e0b0678fd8b90feb91fd69a09bed98ea632"}
Nov 28 17:12:18 crc kubenswrapper[4710]: I1128 17:12:18.095260 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58dr" event={"ID":"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9","Type":"ContainerStarted","Data":"527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284"}
Nov 28 17:12:18 crc kubenswrapper[4710]: I1128 17:12:18.097588 4710 generic.go:334] "Generic (PLEG): container finished" podID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerID="6caf2ae3d90ac0a9325bd40eb3b3351eb55d6c5f68d5a2041784b14c875b81b8" exitCode=0
Nov 28 17:12:18 crc kubenswrapper[4710]: I1128 17:12:18.097648 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2" event={"ID":"776c25fb-769e-45f1-bbdd-1ef457e29908","Type":"ContainerDied","Data":"6caf2ae3d90ac0a9325bd40eb3b3351eb55d6c5f68d5a2041784b14c875b81b8"}
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.450407 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.593903 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-bundle\") pod \"776c25fb-769e-45f1-bbdd-1ef457e29908\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") "
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.593956 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-util\") pod \"776c25fb-769e-45f1-bbdd-1ef457e29908\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") "
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.594039 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slpg7\" (UniqueName: \"kubernetes.io/projected/776c25fb-769e-45f1-bbdd-1ef457e29908-kube-api-access-slpg7\") pod \"776c25fb-769e-45f1-bbdd-1ef457e29908\" (UID: \"776c25fb-769e-45f1-bbdd-1ef457e29908\") "
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.594957 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-bundle" (OuterVolumeSpecName: "bundle") pod "776c25fb-769e-45f1-bbdd-1ef457e29908" (UID: "776c25fb-769e-45f1-bbdd-1ef457e29908"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.602546 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/776c25fb-769e-45f1-bbdd-1ef457e29908-kube-api-access-slpg7" (OuterVolumeSpecName: "kube-api-access-slpg7") pod "776c25fb-769e-45f1-bbdd-1ef457e29908" (UID: "776c25fb-769e-45f1-bbdd-1ef457e29908"). InnerVolumeSpecName "kube-api-access-slpg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.607125 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-util" (OuterVolumeSpecName: "util") pod "776c25fb-769e-45f1-bbdd-1ef457e29908" (UID: "776c25fb-769e-45f1-bbdd-1ef457e29908"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.696791 4710 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.696832 4710 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/776c25fb-769e-45f1-bbdd-1ef457e29908-util\") on node \"crc\" DevicePath \"\""
Nov 28 17:12:19 crc kubenswrapper[4710]: I1128 17:12:19.696867 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slpg7\" (UniqueName: \"kubernetes.io/projected/776c25fb-769e-45f1-bbdd-1ef457e29908-kube-api-access-slpg7\") on node \"crc\" DevicePath \"\""
Nov 28 17:12:20 crc kubenswrapper[4710]: I1128 17:12:20.114174 4710 generic.go:334] "Generic (PLEG): container finished" podID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerID="527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284" exitCode=0
Nov 28 17:12:20 crc kubenswrapper[4710]: I1128 17:12:20.114286 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58dr" event={"ID":"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9","Type":"ContainerDied","Data":"527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284"}
Nov 28 17:12:20 crc kubenswrapper[4710]: I1128 17:12:20.117441 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2" event={"ID":"776c25fb-769e-45f1-bbdd-1ef457e29908","Type":"ContainerDied","Data":"f8c146243df2e1f3f850f954a6ebb1c71eb2820468870e09de84a6456f19be54"}
Nov 28 17:12:20 crc kubenswrapper[4710]: I1128 17:12:20.117503 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8c146243df2e1f3f850f954a6ebb1c71eb2820468870e09de84a6456f19be54"
Nov 28 17:12:20 crc kubenswrapper[4710]: I1128 17:12:20.117550 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2"
Nov 28 17:12:21 crc kubenswrapper[4710]: I1128 17:12:21.125383 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58dr" event={"ID":"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9","Type":"ContainerStarted","Data":"e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348"}
Nov 28 17:12:21 crc kubenswrapper[4710]: I1128 17:12:21.168993 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m58dr" podStartSLOduration=2.6527490780000003 podStartE2EDuration="6.168972095s" podCreationTimestamp="2025-11-28 17:12:15 +0000 UTC" firstStartedPulling="2025-11-28 17:12:17.085886049 +0000 UTC m=+826.344186134" lastFinishedPulling="2025-11-28 17:12:20.602109086 +0000 UTC m=+829.860409151" observedRunningTime="2025-11-28 17:12:21.163602914 +0000 UTC m=+830.421902959" watchObservedRunningTime="2025-11-28 17:12:21.168972095 +0000 UTC m=+830.427272140"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.003622 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629"]
Nov 28 17:12:22 crc kubenswrapper[4710]: E1128 17:12:22.004490 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerName="pull"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.004579 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerName="pull"
Nov 28 17:12:22 crc kubenswrapper[4710]: E1128 17:12:22.004659 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerName="util"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.004728 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerName="util"
Nov 28 17:12:22 crc kubenswrapper[4710]: E1128 17:12:22.004845 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerName="extract"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.004932 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerName="extract"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.005155 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="776c25fb-769e-45f1-bbdd-1ef457e29908" containerName="extract"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.005995 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.009670 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.009992 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-cqpd5"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.010396 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.031659 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629"]
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.136637 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7s7t\" (UniqueName: \"kubernetes.io/projected/18adf227-ae9c-403d-8fe0-107fdf1c2e76-kube-api-access-n7s7t\") pod \"nmstate-operator-5b5b58f5c8-p7629\" (UID: \"18adf227-ae9c-403d-8fe0-107fdf1c2e76\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.237694 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7s7t\" (UniqueName: \"kubernetes.io/projected/18adf227-ae9c-403d-8fe0-107fdf1c2e76-kube-api-access-n7s7t\") pod \"nmstate-operator-5b5b58f5c8-p7629\" (UID: \"18adf227-ae9c-403d-8fe0-107fdf1c2e76\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.259723 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7s7t\" (UniqueName: \"kubernetes.io/projected/18adf227-ae9c-403d-8fe0-107fdf1c2e76-kube-api-access-n7s7t\") pod \"nmstate-operator-5b5b58f5c8-p7629\" (UID: \"18adf227-ae9c-403d-8fe0-107fdf1c2e76\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.322792 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629"
Nov 28 17:12:22 crc kubenswrapper[4710]: I1128 17:12:22.807452 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629"]
Nov 28 17:12:23 crc kubenswrapper[4710]: I1128 17:12:23.159891 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629" event={"ID":"18adf227-ae9c-403d-8fe0-107fdf1c2e76","Type":"ContainerStarted","Data":"5b7283ed30a3ac8f19a2476c074754552d5372e84f0ea2fb7b74987dd6e89728"}
Nov 28 17:12:25 crc kubenswrapper[4710]: I1128 17:12:25.740710 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:25 crc kubenswrapper[4710]: I1128 17:12:25.740790 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m58dr"
Nov 28 17:12:26 crc kubenswrapper[4710]: I1128 17:12:26.791340 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m58dr" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerName="registry-server" probeResult="failure" output=<
Nov 28 17:12:26 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s
Nov 28 17:12:26 crc kubenswrapper[4710]: >
Nov 28 17:12:31 crc kubenswrapper[4710]: I1128 17:12:31.217677 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629" event={"ID":"18adf227-ae9c-403d-8fe0-107fdf1c2e76","Type":"ContainerStarted","Data":"c43a0f76002ac841d23bd6dc420a4810fc6b3e03a3a28c555b620444f3c756cf"}
Nov 28 17:12:31 crc kubenswrapper[4710]: I1128 17:12:31.237912 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p7629" podStartSLOduration=4.030978809 podStartE2EDuration="10.237892511s" podCreationTimestamp="2025-11-28 17:12:21 +0000 UTC" firstStartedPulling="2025-11-28 17:12:22.807192254 +0000 UTC m=+832.065492299" lastFinishedPulling="2025-11-28 17:12:29.014105956 +0000 UTC m=+838.272406001" observedRunningTime="2025-11-28 17:12:31.231995223 +0000 UTC m=+840.490295268" watchObservedRunningTime="2025-11-28 17:12:31.237892511 +0000 UTC m=+840.496192556"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.217375 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2"]
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.219747 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.225534 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-tstjj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.236956 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt"]
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.237910 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.239767 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.252855 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2"]
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.262539 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-kjwqj"]
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.265964 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.266631 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt"]
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.351229 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97"]
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.352161 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.359127 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97"]
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.360620 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.360905 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-9wd69"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.361278 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.393708 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/878067d5-b960-4b2e-915c-89c96da9bbc8-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-6l6rt\" (UID: \"878067d5-b960-4b2e-915c-89c96da9bbc8\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.393775 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6e3b7f00-c71e-4a41-82db-9b1910f3233d-ovs-socket\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.393810 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkglz\" (UniqueName: \"kubernetes.io/projected/7a7eea14-e168-46b6-a7e8-2d910b465c4c-kube-api-access-wkglz\") pod \"nmstate-metrics-7f946cbc9-7gkl2\" (UID: \"7a7eea14-e168-46b6-a7e8-2d910b465c4c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.393834 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6e3b7f00-c71e-4a41-82db-9b1910f3233d-dbus-socket\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.393853 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5qvg\" (UniqueName: \"kubernetes.io/projected/878067d5-b960-4b2e-915c-89c96da9bbc8-kube-api-access-h5qvg\") pod \"nmstate-webhook-5f6d4c5ccb-6l6rt\" (UID: \"878067d5-b960-4b2e-915c-89c96da9bbc8\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.393887 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgblh\" (UniqueName: \"kubernetes.io/projected/6e3b7f00-c71e-4a41-82db-9b1910f3233d-kube-api-access-mgblh\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.393933 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6e3b7f00-c71e-4a41-82db-9b1910f3233d-nmstate-lock\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495124 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcnh7\" (UniqueName: \"kubernetes.io/projected/af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e-kube-api-access-vcnh7\") pod \"nmstate-console-plugin-7fbb5f6569-tmz97\" (UID: \"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495218 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-tmz97\" (UID: \"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495263 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/878067d5-b960-4b2e-915c-89c96da9bbc8-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-6l6rt\" (UID: \"878067d5-b960-4b2e-915c-89c96da9bbc8\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495293 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6e3b7f00-c71e-4a41-82db-9b1910f3233d-ovs-socket\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495333 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkglz\" (UniqueName: \"kubernetes.io/projected/7a7eea14-e168-46b6-a7e8-2d910b465c4c-kube-api-access-wkglz\") pod \"nmstate-metrics-7f946cbc9-7gkl2\" (UID: \"7a7eea14-e168-46b6-a7e8-2d910b465c4c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495353 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tmz97\" (UID: \"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495381 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6e3b7f00-c71e-4a41-82db-9b1910f3233d-dbus-socket\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495419 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5qvg\" (UniqueName: \"kubernetes.io/projected/878067d5-b960-4b2e-915c-89c96da9bbc8-kube-api-access-h5qvg\") pod \"nmstate-webhook-5f6d4c5ccb-6l6rt\" (UID: \"878067d5-b960-4b2e-915c-89c96da9bbc8\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt"
Nov 28 17:12:32 crc kubenswrapper[4710]: E1128 17:12:32.495421 4710 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495436 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/6e3b7f00-c71e-4a41-82db-9b1910f3233d-ovs-socket\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495493 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgblh\" (UniqueName: \"kubernetes.io/projected/6e3b7f00-c71e-4a41-82db-9b1910f3233d-kube-api-access-mgblh\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: E1128 17:12:32.495506 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/878067d5-b960-4b2e-915c-89c96da9bbc8-tls-key-pair podName:878067d5-b960-4b2e-915c-89c96da9bbc8 nodeName:}" failed. No retries permitted until 2025-11-28 17:12:32.995482863 +0000 UTC m=+842.253782978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/878067d5-b960-4b2e-915c-89c96da9bbc8-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-6l6rt" (UID: "878067d5-b960-4b2e-915c-89c96da9bbc8") : secret "openshift-nmstate-webhook" not found
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495580 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6e3b7f00-c71e-4a41-82db-9b1910f3233d-nmstate-lock\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495686 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/6e3b7f00-c71e-4a41-82db-9b1910f3233d-nmstate-lock\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.495859 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/6e3b7f00-c71e-4a41-82db-9b1910f3233d-dbus-socket\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.520259 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkglz\" (UniqueName: \"kubernetes.io/projected/7a7eea14-e168-46b6-a7e8-2d910b465c4c-kube-api-access-wkglz\") pod \"nmstate-metrics-7f946cbc9-7gkl2\" (UID: \"7a7eea14-e168-46b6-a7e8-2d910b465c4c\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.520767 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgblh\" (UniqueName: \"kubernetes.io/projected/6e3b7f00-c71e-4a41-82db-9b1910f3233d-kube-api-access-mgblh\") pod \"nmstate-handler-kjwqj\" (UID: \"6e3b7f00-c71e-4a41-82db-9b1910f3233d\") " pod="openshift-nmstate/nmstate-handler-kjwqj"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.526445 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5qvg\" (UniqueName: \"kubernetes.io/projected/878067d5-b960-4b2e-915c-89c96da9bbc8-kube-api-access-h5qvg\") pod \"nmstate-webhook-5f6d4c5ccb-6l6rt\" (UID: \"878067d5-b960-4b2e-915c-89c96da9bbc8\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.540903 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.568666 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65d85fb994-5k4pd"]
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.569504 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65d85fb994-5k4pd"
Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.582462 4710 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kjwqj" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.597075 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcnh7\" (UniqueName: \"kubernetes.io/projected/af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e-kube-api-access-vcnh7\") pod \"nmstate-console-plugin-7fbb5f6569-tmz97\" (UID: \"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.597163 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-tmz97\" (UID: \"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.597218 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tmz97\" (UID: \"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.598660 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-tmz97\" (UID: \"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.601539 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-tmz97\" (UID: \"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.601672 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65d85fb994-5k4pd"] Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.619803 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcnh7\" (UniqueName: \"kubernetes.io/projected/af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e-kube-api-access-vcnh7\") pod \"nmstate-console-plugin-7fbb5f6569-tmz97\" (UID: \"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" Nov 28 17:12:32 crc kubenswrapper[4710]: W1128 17:12:32.636567 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e3b7f00_c71e_4a41_82db_9b1910f3233d.slice/crio-3cacdcedb9fb9b20090e3dbd4261e8f02bc7e33026ff510191611b4bd52f581a WatchSource:0}: Error finding container 3cacdcedb9fb9b20090e3dbd4261e8f02bc7e33026ff510191611b4bd52f581a: Status 404 returned error can't find the container with id 3cacdcedb9fb9b20090e3dbd4261e8f02bc7e33026ff510191611b4bd52f581a Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.668260 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.700678 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-console-config\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.700782 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-trusted-ca-bundle\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.700829 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-console-oauth-config\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.700907 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-oauth-serving-cert\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.701067 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-service-ca\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.701141 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-console-serving-cert\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.701177 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgp8x\" (UniqueName: \"kubernetes.io/projected/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-kube-api-access-hgp8x\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.838647 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-console-serving-cert\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.838961 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgp8x\" 
(UniqueName: \"kubernetes.io/projected/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-kube-api-access-hgp8x\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.839014 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-console-config\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.839056 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-trusted-ca-bundle\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.839091 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-console-oauth-config\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.839138 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-oauth-serving-cert\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.839197 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-service-ca\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.840430 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-service-ca\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.840443 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-console-config\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.841356 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-trusted-ca-bundle\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.842066 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-oauth-serving-cert\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.845869 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-console-serving-cert\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:32 crc kubenswrapper[4710]: I1128 17:12:32.866845 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgp8x\" (UniqueName: \"kubernetes.io/projected/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-kube-api-access-hgp8x\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.041712 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/878067d5-b960-4b2e-915c-89c96da9bbc8-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-6l6rt\" (UID: \"878067d5-b960-4b2e-915c-89c96da9bbc8\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt" Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.046348 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/878067d5-b960-4b2e-915c-89c96da9bbc8-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-6l6rt\" (UID: \"878067d5-b960-4b2e-915c-89c96da9bbc8\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt" Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.050858 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8e63bc17-5a16-45d2-bf26-954cdcc5bcd4-console-oauth-config\") pod \"console-65d85fb994-5k4pd\" (UID: \"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4\") " pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.112337 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2"] Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.154893 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt" Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.234522 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2" event={"ID":"7a7eea14-e168-46b6-a7e8-2d910b465c4c","Type":"ContainerStarted","Data":"08bc3d04d9012005e67b5235327702ee60b734611a47dc1f56b45b9f4ef1e6b7"} Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.235379 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kjwqj" event={"ID":"6e3b7f00-c71e-4a41-82db-9b1910f3233d","Type":"ContainerStarted","Data":"3cacdcedb9fb9b20090e3dbd4261e8f02bc7e33026ff510191611b4bd52f581a"} Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.242643 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97"] Nov 28 17:12:33 crc kubenswrapper[4710]: W1128 17:12:33.247107 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf5831ae_b1bc_4a39_b1bb_6e3c8fb27e0e.slice/crio-437d755123958c587b210404fb75979fdaef6ab7c01031964c09f797c171daf9 WatchSource:0}: Error finding container 437d755123958c587b210404fb75979fdaef6ab7c01031964c09f797c171daf9: Status 404 returned error can't find the container with id 437d755123958c587b210404fb75979fdaef6ab7c01031964c09f797c171daf9 Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.340685 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.578954 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt"] Nov 28 17:12:33 crc kubenswrapper[4710]: W1128 17:12:33.585676 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod878067d5_b960_4b2e_915c_89c96da9bbc8.slice/crio-c0a465501ce2a25771ca12e2dde7b9e81c8ae511f6127c14ad454e3e03e8f28b WatchSource:0}: Error finding container c0a465501ce2a25771ca12e2dde7b9e81c8ae511f6127c14ad454e3e03e8f28b: Status 404 returned error can't find the container with id c0a465501ce2a25771ca12e2dde7b9e81c8ae511f6127c14ad454e3e03e8f28b Nov 28 17:12:33 crc kubenswrapper[4710]: I1128 17:12:33.621115 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65d85fb994-5k4pd"] Nov 28 17:12:33 crc kubenswrapper[4710]: W1128 17:12:33.625125 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e63bc17_5a16_45d2_bf26_954cdcc5bcd4.slice/crio-08d41f6865c7c4b34440a53ef444ec2878d690812a4030dc519caa5eb68d8c20 WatchSource:0}: Error finding container 08d41f6865c7c4b34440a53ef444ec2878d690812a4030dc519caa5eb68d8c20: Status 404 returned error can't find the container with id 08d41f6865c7c4b34440a53ef444ec2878d690812a4030dc519caa5eb68d8c20 Nov 28 17:12:34 crc kubenswrapper[4710]: I1128 17:12:34.241276 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt" event={"ID":"878067d5-b960-4b2e-915c-89c96da9bbc8","Type":"ContainerStarted","Data":"c0a465501ce2a25771ca12e2dde7b9e81c8ae511f6127c14ad454e3e03e8f28b"} Nov 28 17:12:34 crc kubenswrapper[4710]: I1128 17:12:34.242406 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" event={"ID":"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e","Type":"ContainerStarted","Data":"437d755123958c587b210404fb75979fdaef6ab7c01031964c09f797c171daf9"} Nov 28 17:12:34 crc kubenswrapper[4710]: I1128 17:12:34.244111 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65d85fb994-5k4pd" event={"ID":"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4","Type":"ContainerStarted","Data":"93d1c775698bbbb91e64137e7e3df458312f4f581f558e9684900369a7a15016"} Nov 28 17:12:34 crc kubenswrapper[4710]: I1128 17:12:34.244145 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65d85fb994-5k4pd" event={"ID":"8e63bc17-5a16-45d2-bf26-954cdcc5bcd4","Type":"ContainerStarted","Data":"08d41f6865c7c4b34440a53ef444ec2878d690812a4030dc519caa5eb68d8c20"} Nov 28 17:12:34 crc kubenswrapper[4710]: I1128 17:12:34.268294 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65d85fb994-5k4pd" podStartSLOduration=2.268270804 podStartE2EDuration="2.268270804s" podCreationTimestamp="2025-11-28 17:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:12:34.266608522 +0000 UTC m=+843.524908567" watchObservedRunningTime="2025-11-28 17:12:34.268270804 +0000 UTC m=+843.526570889" Nov 28 17:12:35 crc kubenswrapper[4710]: I1128 17:12:35.810786 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m58dr" Nov 28 17:12:35 crc kubenswrapper[4710]: I1128 17:12:35.854462 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m58dr" Nov 28 17:12:36 crc kubenswrapper[4710]: I1128 17:12:36.042005 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m58dr"] Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.278222 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m58dr" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerName="registry-server" containerID="cri-o://e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348" gracePeriod=2 Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.627498 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m58dr" Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.782849 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-catalog-content\") pod \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.782945 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6h4j\" (UniqueName: \"kubernetes.io/projected/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-kube-api-access-j6h4j\") pod \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.783068 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-utilities\") pod \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\" (UID: \"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9\") " Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.783970 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-utilities" (OuterVolumeSpecName: "utilities") pod "49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" (UID: "49d56ff0-aedf-42dd-9bcd-1fba3039d5a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.789577 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-kube-api-access-j6h4j" (OuterVolumeSpecName: "kube-api-access-j6h4j") pod "49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" (UID: "49d56ff0-aedf-42dd-9bcd-1fba3039d5a9"). InnerVolumeSpecName "kube-api-access-j6h4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.885276 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.885314 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6h4j\" (UniqueName: \"kubernetes.io/projected/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-kube-api-access-j6h4j\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.893315 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" (UID: "49d56ff0-aedf-42dd-9bcd-1fba3039d5a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:12:37 crc kubenswrapper[4710]: I1128 17:12:37.986406 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.291157 4710 generic.go:334] "Generic (PLEG): container finished" podID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerID="e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348" exitCode=0 Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.291245 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m58dr" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.291268 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58dr" event={"ID":"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9","Type":"ContainerDied","Data":"e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348"} Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.294620 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m58dr" event={"ID":"49d56ff0-aedf-42dd-9bcd-1fba3039d5a9","Type":"ContainerDied","Data":"f5dbde91f6e7181bfdb32b44b2d411157c075587f1449ff72cb038b9577a7d10"} Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.294839 4710 scope.go:117] "RemoveContainer" containerID="e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.300060 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" event={"ID":"af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e","Type":"ContainerStarted","Data":"bbeb90fff2aabecd9c57c6422e791805936379e3d670c930cdd14764b48df8a9"} Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.302544 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kjwqj" event={"ID":"6e3b7f00-c71e-4a41-82db-9b1910f3233d","Type":"ContainerStarted","Data":"36480caf22bce931861c1403a1be0146e7d7312832e709d99d7eed674a80feba"} Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.302624 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-kjwqj" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.305481 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2" event={"ID":"7a7eea14-e168-46b6-a7e8-2d910b465c4c","Type":"ContainerStarted","Data":"e39c2e2a283ab3bfc6abc5d105faeaea62b084ee14f1f10ae452583a6481d4b8"} Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.307460 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt" event={"ID":"878067d5-b960-4b2e-915c-89c96da9bbc8","Type":"ContainerStarted","Data":"3039672ad7787700a30def70157bd5a44574042fc5a6b8cd84857a7750d77999"} Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.307926 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.322254 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-tmz97" podStartSLOduration=2.455766768 podStartE2EDuration="6.322234423s" 
podCreationTimestamp="2025-11-28 17:12:32 +0000 UTC" firstStartedPulling="2025-11-28 17:12:33.250704063 +0000 UTC m=+842.509004108" lastFinishedPulling="2025-11-28 17:12:37.117171718 +0000 UTC m=+846.375471763" observedRunningTime="2025-11-28 17:12:38.318457543 +0000 UTC m=+847.576757598" watchObservedRunningTime="2025-11-28 17:12:38.322234423 +0000 UTC m=+847.580534468" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.340400 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt" podStartSLOduration=2.787913977 podStartE2EDuration="6.340376752s" podCreationTimestamp="2025-11-28 17:12:32 +0000 UTC" firstStartedPulling="2025-11-28 17:12:33.590136005 +0000 UTC m=+842.848436050" lastFinishedPulling="2025-11-28 17:12:37.14259878 +0000 UTC m=+846.400898825" observedRunningTime="2025-11-28 17:12:38.337768619 +0000 UTC m=+847.596068684" watchObservedRunningTime="2025-11-28 17:12:38.340376752 +0000 UTC m=+847.598676797" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.342285 4710 scope.go:117] "RemoveContainer" containerID="527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.367813 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m58dr"] Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.377451 4710 scope.go:117] "RemoveContainer" containerID="89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.384791 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m58dr"] Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.388485 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-kjwqj" podStartSLOduration=1.905755147 podStartE2EDuration="6.388453917s" podCreationTimestamp="2025-11-28 17:12:32 +0000 UTC" firstStartedPulling="2025-11-28 17:12:32.641703268 +0000 UTC m=+841.900003313" lastFinishedPulling="2025-11-28 17:12:37.124402028 +0000 UTC m=+846.382702083" observedRunningTime="2025-11-28 17:12:38.373483779 +0000 UTC m=+847.631783824" watchObservedRunningTime="2025-11-28 17:12:38.388453917 +0000 UTC m=+847.646753982" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.424476 4710 scope.go:117] "RemoveContainer" containerID="e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348" Nov 28 17:12:38 crc kubenswrapper[4710]: E1128 17:12:38.425006 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348\": container with ID starting with e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348 not found: ID does not exist" containerID="e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.425042 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348"} err="failed to get container status \"e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348\": rpc error: code = NotFound desc = could not find container \"e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348\": container with ID starting with e543a7f3e74073a31fabf662fe71b22039b0d490761000bbe358292828afe348 not found: ID does 
not exist" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.425066 4710 scope.go:117] "RemoveContainer" containerID="527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284" Nov 28 17:12:38 crc kubenswrapper[4710]: E1128 17:12:38.425346 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284\": container with ID starting with 527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284 not found: ID does not exist" containerID="527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.425370 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284"} err="failed to get container status \"527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284\": rpc error: code = NotFound desc = could not find container \"527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284\": container with ID starting with 527054c04631013ed41c29fe895e8b6949c5ca0a5455913899820621aaf87284 not found: ID does not exist" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.425385 4710 scope.go:117] "RemoveContainer" containerID="89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e" Nov 28 17:12:38 crc kubenswrapper[4710]: E1128 17:12:38.425677 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e\": container with ID starting with 89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e not found: ID does not exist" containerID="89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e" Nov 28 17:12:38 crc kubenswrapper[4710]: I1128 17:12:38.425706 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e"} err="failed to get container status \"89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e\": rpc error: code = NotFound desc = could not find container \"89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e\": container with ID starting with 89a91ca2df3ba637e00b6f097c7a5a34aab18f0392c3e063ae2f9d310df5247e not found: ID does not exist" Nov 28 17:12:39 crc kubenswrapper[4710]: I1128 17:12:39.153295 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" path="/var/lib/kubelet/pods/49d56ff0-aedf-42dd-9bcd-1fba3039d5a9/volumes" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.451272 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kp6nc"] Nov 28 17:12:41 crc kubenswrapper[4710]: E1128 17:12:41.451531 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerName="extract-content" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.451542 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerName="extract-content" Nov 28 17:12:41 crc kubenswrapper[4710]: E1128 17:12:41.451556 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerName="extract-utilities" Nov 28 17:12:41 crc kubenswrapper[4710]: 
I1128 17:12:41.451563 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerName="extract-utilities" Nov 28 17:12:41 crc kubenswrapper[4710]: E1128 17:12:41.451576 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerName="registry-server" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.451582 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerName="registry-server" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.451709 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d56ff0-aedf-42dd-9bcd-1fba3039d5a9" containerName="registry-server" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.452605 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.464156 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kp6nc"] Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.556992 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-utilities\") pod \"certified-operators-kp6nc\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.557094 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7pqx\" (UniqueName: \"kubernetes.io/projected/64d2ed7b-569f-41f5-8198-0b59f43a63f1-kube-api-access-g7pqx\") pod \"certified-operators-kp6nc\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.557151 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-catalog-content\") pod \"certified-operators-kp6nc\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.659052 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-utilities\") pod \"certified-operators-kp6nc\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.659395 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7pqx\" (UniqueName: \"kubernetes.io/projected/64d2ed7b-569f-41f5-8198-0b59f43a63f1-kube-api-access-g7pqx\") pod \"certified-operators-kp6nc\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.659617 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-catalog-content\") pod \"certified-operators-kp6nc\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " 
pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.660020 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-utilities\") pod \"certified-operators-kp6nc\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.660366 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-catalog-content\") pod \"certified-operators-kp6nc\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.692686 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7pqx\" (UniqueName: \"kubernetes.io/projected/64d2ed7b-569f-41f5-8198-0b59f43a63f1-kube-api-access-g7pqx\") pod \"certified-operators-kp6nc\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:41 crc kubenswrapper[4710]: I1128 17:12:41.777721 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:42 crc kubenswrapper[4710]: I1128 17:12:42.289340 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kp6nc"] Nov 28 17:12:42 crc kubenswrapper[4710]: W1128 17:12:42.295037 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64d2ed7b_569f_41f5_8198_0b59f43a63f1.slice/crio-d649e311f32930ebfa80c113e70f8d12ca3e6ebb19ad36077c79ce55db1c8182 WatchSource:0}: Error finding container d649e311f32930ebfa80c113e70f8d12ca3e6ebb19ad36077c79ce55db1c8182: Status 404 returned error can't find the container with id d649e311f32930ebfa80c113e70f8d12ca3e6ebb19ad36077c79ce55db1c8182 Nov 28 17:12:42 crc kubenswrapper[4710]: I1128 17:12:42.341804 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2" event={"ID":"7a7eea14-e168-46b6-a7e8-2d910b465c4c","Type":"ContainerStarted","Data":"cc4d8dae087fe3750fed33b1384f7ef7150972600570290a44ebb773c18c0af3"} Nov 28 17:12:42 crc kubenswrapper[4710]: I1128 17:12:42.344176 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp6nc" event={"ID":"64d2ed7b-569f-41f5-8198-0b59f43a63f1","Type":"ContainerStarted","Data":"d649e311f32930ebfa80c113e70f8d12ca3e6ebb19ad36077c79ce55db1c8182"} Nov 28 17:12:42 crc kubenswrapper[4710]: I1128 17:12:42.359862 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-7gkl2" podStartSLOduration=1.660198279 podStartE2EDuration="10.35984597s" podCreationTimestamp="2025-11-28 17:12:32 +0000 UTC" firstStartedPulling="2025-11-28 17:12:33.118051599 +0000 UTC m=+842.376351644" lastFinishedPulling="2025-11-28 17:12:41.81769929 +0000 UTC m=+851.075999335" observedRunningTime="2025-11-28 17:12:42.357247057 +0000 UTC m=+851.615547102" watchObservedRunningTime="2025-11-28 17:12:42.35984597 +0000 UTC m=+851.618146015" Nov 28 17:12:42 crc kubenswrapper[4710]: I1128 17:12:42.605708 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-nmstate/nmstate-handler-kjwqj" Nov 28 17:12:43 crc kubenswrapper[4710]: I1128 17:12:43.342103 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:43 crc kubenswrapper[4710]: I1128 17:12:43.342438 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:43 crc kubenswrapper[4710]: I1128 17:12:43.350942 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:43 crc kubenswrapper[4710]: I1128 17:12:43.352585 4710 generic.go:334] "Generic (PLEG): container finished" podID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerID="b8019be1f4cb23aa739c21a5da50ae7544d3c5c35ccb51d085d2453b70753f83" exitCode=0 Nov 28 17:12:43 crc kubenswrapper[4710]: I1128 17:12:43.352626 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp6nc" event={"ID":"64d2ed7b-569f-41f5-8198-0b59f43a63f1","Type":"ContainerDied","Data":"b8019be1f4cb23aa739c21a5da50ae7544d3c5c35ccb51d085d2453b70753f83"} Nov 28 17:12:43 crc kubenswrapper[4710]: I1128 17:12:43.356597 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65d85fb994-5k4pd" Nov 28 17:12:43 crc kubenswrapper[4710]: I1128 17:12:43.423449 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z7cgp"] Nov 28 17:12:45 crc kubenswrapper[4710]: I1128 17:12:45.375018 4710 generic.go:334] "Generic (PLEG): container finished" podID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerID="36093ea74bab42df1d0cd703c89040e24684977eac16b0db42078c4cc42731f0" exitCode=0 Nov 28 17:12:45 crc kubenswrapper[4710]: I1128 17:12:45.375148 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp6nc" event={"ID":"64d2ed7b-569f-41f5-8198-0b59f43a63f1","Type":"ContainerDied","Data":"36093ea74bab42df1d0cd703c89040e24684977eac16b0db42078c4cc42731f0"} Nov 28 17:12:47 crc kubenswrapper[4710]: I1128 17:12:47.392074 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp6nc" event={"ID":"64d2ed7b-569f-41f5-8198-0b59f43a63f1","Type":"ContainerStarted","Data":"eb4070de43fe4baa2ffb5e55e510b5583c050238a9f565ab42714abc0a834ea1"} Nov 28 17:12:51 crc kubenswrapper[4710]: I1128 17:12:51.779201 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:51 crc kubenswrapper[4710]: I1128 17:12:51.779866 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:51 crc kubenswrapper[4710]: I1128 17:12:51.824296 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:51 crc kubenswrapper[4710]: I1128 17:12:51.849896 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kp6nc" podStartSLOduration=6.977040301 podStartE2EDuration="10.849875569s" podCreationTimestamp="2025-11-28 17:12:41 +0000 UTC" firstStartedPulling="2025-11-28 17:12:43.358738746 +0000 UTC m=+852.617038791" lastFinishedPulling="2025-11-28 17:12:47.231574014 +0000 UTC m=+856.489874059" observedRunningTime="2025-11-28 17:12:47.411994411 +0000 UTC m=+856.670294456" 
watchObservedRunningTime="2025-11-28 17:12:51.849875569 +0000 UTC m=+861.108175624" Nov 28 17:12:52 crc kubenswrapper[4710]: I1128 17:12:52.469096 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:52 crc kubenswrapper[4710]: I1128 17:12:52.508846 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kp6nc"] Nov 28 17:12:53 crc kubenswrapper[4710]: I1128 17:12:53.160950 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-6l6rt" Nov 28 17:12:54 crc kubenswrapper[4710]: I1128 17:12:54.436454 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kp6nc" podUID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerName="registry-server" containerID="cri-o://eb4070de43fe4baa2ffb5e55e510b5583c050238a9f565ab42714abc0a834ea1" gracePeriod=2 Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.457926 4710 generic.go:334] "Generic (PLEG): container finished" podID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerID="eb4070de43fe4baa2ffb5e55e510b5583c050238a9f565ab42714abc0a834ea1" exitCode=0 Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.457994 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp6nc" event={"ID":"64d2ed7b-569f-41f5-8198-0b59f43a63f1","Type":"ContainerDied","Data":"eb4070de43fe4baa2ffb5e55e510b5583c050238a9f565ab42714abc0a834ea1"} Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.627437 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.799484 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7pqx\" (UniqueName: \"kubernetes.io/projected/64d2ed7b-569f-41f5-8198-0b59f43a63f1-kube-api-access-g7pqx\") pod \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.799786 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-utilities\") pod \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.799856 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-catalog-content\") pod \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\" (UID: \"64d2ed7b-569f-41f5-8198-0b59f43a63f1\") " Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.800600 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-utilities" (OuterVolumeSpecName: "utilities") pod "64d2ed7b-569f-41f5-8198-0b59f43a63f1" (UID: "64d2ed7b-569f-41f5-8198-0b59f43a63f1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.806808 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64d2ed7b-569f-41f5-8198-0b59f43a63f1-kube-api-access-g7pqx" (OuterVolumeSpecName: "kube-api-access-g7pqx") pod "64d2ed7b-569f-41f5-8198-0b59f43a63f1" (UID: "64d2ed7b-569f-41f5-8198-0b59f43a63f1"). InnerVolumeSpecName "kube-api-access-g7pqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.846739 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64d2ed7b-569f-41f5-8198-0b59f43a63f1" (UID: "64d2ed7b-569f-41f5-8198-0b59f43a63f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.901422 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.901460 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d2ed7b-569f-41f5-8198-0b59f43a63f1-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:56 crc kubenswrapper[4710]: I1128 17:12:56.901471 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7pqx\" (UniqueName: \"kubernetes.io/projected/64d2ed7b-569f-41f5-8198-0b59f43a63f1-kube-api-access-g7pqx\") on node \"crc\" DevicePath \"\"" Nov 28 17:12:57 crc kubenswrapper[4710]: I1128 17:12:57.468998 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp6nc" event={"ID":"64d2ed7b-569f-41f5-8198-0b59f43a63f1","Type":"ContainerDied","Data":"d649e311f32930ebfa80c113e70f8d12ca3e6ebb19ad36077c79ce55db1c8182"} Nov 28 17:12:57 crc kubenswrapper[4710]: I1128 17:12:57.469055 4710 scope.go:117] "RemoveContainer" containerID="eb4070de43fe4baa2ffb5e55e510b5583c050238a9f565ab42714abc0a834ea1" Nov 28 17:12:57 crc kubenswrapper[4710]: I1128 17:12:57.469100 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kp6nc" Nov 28 17:12:57 crc kubenswrapper[4710]: I1128 17:12:57.493013 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kp6nc"] Nov 28 17:12:57 crc kubenswrapper[4710]: I1128 17:12:57.493795 4710 scope.go:117] "RemoveContainer" containerID="36093ea74bab42df1d0cd703c89040e24684977eac16b0db42078c4cc42731f0" Nov 28 17:12:57 crc kubenswrapper[4710]: I1128 17:12:57.499242 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kp6nc"] Nov 28 17:12:57 crc kubenswrapper[4710]: I1128 17:12:57.511673 4710 scope.go:117] "RemoveContainer" containerID="b8019be1f4cb23aa739c21a5da50ae7544d3c5c35ccb51d085d2453b70753f83" Nov 28 17:12:59 crc kubenswrapper[4710]: I1128 17:12:59.155226 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" path="/var/lib/kubelet/pods/64d2ed7b-569f-41f5-8198-0b59f43a63f1/volumes" Nov 28 17:13:08 crc kubenswrapper[4710]: I1128 17:13:08.471474 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-z7cgp" podUID="2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" containerName="console" containerID="cri-o://f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f" gracePeriod=15 Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.388023 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z7cgp_2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3/console/0.log" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.388272 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z7cgp" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.500937 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-service-ca\") pod \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.500992 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnpfh\" (UniqueName: \"kubernetes.io/projected/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-kube-api-access-pnpfh\") pod \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.501030 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-oauth-serving-cert\") pod \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.501054 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-serving-cert\") pod \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.501129 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-oauth-config\") pod 
\"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.501161 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-config\") pod \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.501203 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-trusted-ca-bundle\") pod \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\" (UID: \"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3\") " Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.502005 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" (UID: "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.502462 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-config" (OuterVolumeSpecName: "console-config") pod "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" (UID: "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.502456 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-service-ca" (OuterVolumeSpecName: "service-ca") pod "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" (UID: "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.502553 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" (UID: "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.511141 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" (UID: "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.512014 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-kube-api-access-pnpfh" (OuterVolumeSpecName: "kube-api-access-pnpfh") pod "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" (UID: "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3"). InnerVolumeSpecName "kube-api-access-pnpfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.512714 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" (UID: "2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.602538 4710 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.602574 4710 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.602585 4710 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.602594 4710 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-console-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.602603 4710 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.602611 4710 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.602618 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnpfh\" (UniqueName: \"kubernetes.io/projected/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3-kube-api-access-pnpfh\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.609641 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-z7cgp_2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3/console/0.log" Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.609688 4710 generic.go:334] "Generic (PLEG): container finished" podID="2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" containerID="f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f" exitCode=2 Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.609716 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z7cgp" event={"ID":"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3","Type":"ContainerDied","Data":"f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f"} Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.609741 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-z7cgp" event={"ID":"2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3","Type":"ContainerDied","Data":"9b794631d23025612db4cd0dc4d84121fb726661e2ad09c3d22241a3722ad698"} Nov 28 17:13:09 crc 
Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.609775 4710 scope.go:117] "RemoveContainer" containerID="f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f"
Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.609856 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-z7cgp"
Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.639301 4710 scope.go:117] "RemoveContainer" containerID="f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f"
Nov 28 17:13:09 crc kubenswrapper[4710]: E1128 17:13:09.639649 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f\": container with ID starting with f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f not found: ID does not exist" containerID="f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f"
Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.639688 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f"} err="failed to get container status \"f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f\": rpc error: code = NotFound desc = could not find container \"f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f\": container with ID starting with f100fa32fb3843dfeb96a43f9d85c7bfb815a4757975414e764fbd7cfc2a5f9f not found: ID does not exist"
Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.640230 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-z7cgp"]
Nov 28 17:13:09 crc kubenswrapper[4710]: I1128 17:13:09.644790 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-z7cgp"]
Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.151645 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" path="/var/lib/kubelet/pods/2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3/volumes"
Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.396963 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2"]
Nov 28 17:13:11 crc kubenswrapper[4710]: E1128 17:13:11.397268 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" containerName="console"
Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.397283 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" containerName="console"
Nov 28 17:13:11 crc kubenswrapper[4710]: E1128 17:13:11.397293 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerName="extract-content"
Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.397307 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerName="extract-content"
Nov 28 17:13:11 crc kubenswrapper[4710]: E1128 17:13:11.397333 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerName="extract-utilities"
Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.397344 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d2ed7b-569f-41f5-8198-0b59f43a63f1"
containerName="extract-utilities" Nov 28 17:13:11 crc kubenswrapper[4710]: E1128 17:13:11.397359 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerName="registry-server" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.397368 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerName="registry-server" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.397571 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="2afb75a4-3327-4ac7-b503-a5bfbf6f3fa3" containerName="console" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.397595 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="64d2ed7b-569f-41f5-8198-0b59f43a63f1" containerName="registry-server" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.398842 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.400895 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.404885 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2"] Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.427379 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.427450 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4xn\" (UniqueName: \"kubernetes.io/projected/39e81a6e-82aa-4fbe-9e06-4854b233df2e-kube-api-access-ng4xn\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.427543 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.528727 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.528815 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4xn\" (UniqueName: 
\"kubernetes.io/projected/39e81a6e-82aa-4fbe-9e06-4854b233df2e-kube-api-access-ng4xn\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.528846 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.529206 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.529291 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.552715 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4xn\" (UniqueName: \"kubernetes.io/projected/39e81a6e-82aa-4fbe-9e06-4854b233df2e-kube-api-access-ng4xn\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:11 crc kubenswrapper[4710]: I1128 17:13:11.716696 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:12 crc kubenswrapper[4710]: I1128 17:13:12.159195 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2"] Nov 28 17:13:12 crc kubenswrapper[4710]: W1128 17:13:12.166857 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39e81a6e_82aa_4fbe_9e06_4854b233df2e.slice/crio-4aaa618f06cfd9960928638d8005427f282a5a46a76956fc8ccb9ebce793d806 WatchSource:0}: Error finding container 4aaa618f06cfd9960928638d8005427f282a5a46a76956fc8ccb9ebce793d806: Status 404 returned error can't find the container with id 4aaa618f06cfd9960928638d8005427f282a5a46a76956fc8ccb9ebce793d806 Nov 28 17:13:12 crc kubenswrapper[4710]: I1128 17:13:12.629990 4710 generic.go:334] "Generic (PLEG): container finished" podID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerID="ef53d8f6cba38c17d67cb8ab53a26003ca5a98fb3562e66214a0c8384737c6da" exitCode=0 Nov 28 17:13:12 crc kubenswrapper[4710]: I1128 17:13:12.630287 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" event={"ID":"39e81a6e-82aa-4fbe-9e06-4854b233df2e","Type":"ContainerDied","Data":"ef53d8f6cba38c17d67cb8ab53a26003ca5a98fb3562e66214a0c8384737c6da"} Nov 28 17:13:12 crc kubenswrapper[4710]: I1128 17:13:12.630312 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" event={"ID":"39e81a6e-82aa-4fbe-9e06-4854b233df2e","Type":"ContainerStarted","Data":"4aaa618f06cfd9960928638d8005427f282a5a46a76956fc8ccb9ebce793d806"} Nov 28 17:13:14 crc kubenswrapper[4710]: I1128 17:13:14.646264 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" event={"ID":"39e81a6e-82aa-4fbe-9e06-4854b233df2e","Type":"ContainerStarted","Data":"dabb1626ab3164db0929893f2570b7c40d575e582f2f422e8fc0a54182beabb6"} Nov 28 17:13:15 crc kubenswrapper[4710]: I1128 17:13:15.653454 4710 generic.go:334] "Generic (PLEG): container finished" podID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerID="dabb1626ab3164db0929893f2570b7c40d575e582f2f422e8fc0a54182beabb6" exitCode=0 Nov 28 17:13:15 crc kubenswrapper[4710]: I1128 17:13:15.653537 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" event={"ID":"39e81a6e-82aa-4fbe-9e06-4854b233df2e","Type":"ContainerDied","Data":"dabb1626ab3164db0929893f2570b7c40d575e582f2f422e8fc0a54182beabb6"} Nov 28 17:13:16 crc kubenswrapper[4710]: I1128 17:13:16.664265 4710 generic.go:334] "Generic (PLEG): container finished" podID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerID="33cab89b6492862ea2d18a304512abfb0bb3c373c304a46847eb285044a82bbb" exitCode=0 Nov 28 17:13:16 crc kubenswrapper[4710]: I1128 17:13:16.664305 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" event={"ID":"39e81a6e-82aa-4fbe-9e06-4854b233df2e","Type":"ContainerDied","Data":"33cab89b6492862ea2d18a304512abfb0bb3c373c304a46847eb285044a82bbb"} Nov 28 17:13:17 crc kubenswrapper[4710]: I1128 17:13:17.948430 4710 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.121276 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-bundle\") pod \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.121532 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng4xn\" (UniqueName: \"kubernetes.io/projected/39e81a6e-82aa-4fbe-9e06-4854b233df2e-kube-api-access-ng4xn\") pod \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.121657 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-util\") pod \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\" (UID: \"39e81a6e-82aa-4fbe-9e06-4854b233df2e\") " Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.122289 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-bundle" (OuterVolumeSpecName: "bundle") pod "39e81a6e-82aa-4fbe-9e06-4854b233df2e" (UID: "39e81a6e-82aa-4fbe-9e06-4854b233df2e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.127159 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39e81a6e-82aa-4fbe-9e06-4854b233df2e-kube-api-access-ng4xn" (OuterVolumeSpecName: "kube-api-access-ng4xn") pod "39e81a6e-82aa-4fbe-9e06-4854b233df2e" (UID: "39e81a6e-82aa-4fbe-9e06-4854b233df2e"). InnerVolumeSpecName "kube-api-access-ng4xn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.133811 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-util" (OuterVolumeSpecName: "util") pod "39e81a6e-82aa-4fbe-9e06-4854b233df2e" (UID: "39e81a6e-82aa-4fbe-9e06-4854b233df2e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.223973 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng4xn\" (UniqueName: \"kubernetes.io/projected/39e81a6e-82aa-4fbe-9e06-4854b233df2e-kube-api-access-ng4xn\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.224032 4710 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-util\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.224042 4710 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/39e81a6e-82aa-4fbe-9e06-4854b233df2e-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.681541 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" event={"ID":"39e81a6e-82aa-4fbe-9e06-4854b233df2e","Type":"ContainerDied","Data":"4aaa618f06cfd9960928638d8005427f282a5a46a76956fc8ccb9ebce793d806"} Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.681589 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aaa618f06cfd9960928638d8005427f282a5a46a76956fc8ccb9ebce793d806" Nov 28 17:13:18 crc kubenswrapper[4710]: I1128 17:13:18.681595 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.916894 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79"] Nov 28 17:13:28 crc kubenswrapper[4710]: E1128 17:13:28.917370 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerName="pull" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.917382 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerName="pull" Nov 28 17:13:28 crc kubenswrapper[4710]: E1128 17:13:28.917396 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerName="util" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.917403 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerName="util" Nov 28 17:13:28 crc kubenswrapper[4710]: E1128 17:13:28.917416 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerName="extract" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.917422 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerName="extract" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.917531 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="39e81a6e-82aa-4fbe-9e06-4854b233df2e" containerName="extract" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.917992 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.925141 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.925167 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.925425 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-kxhxs" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.925345 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.925564 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 28 17:13:28 crc kubenswrapper[4710]: I1128 17:13:28.946391 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79"] Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.077097 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbc5m\" (UniqueName: \"kubernetes.io/projected/a01a3bc0-24e8-423f-87c8-32a5cca2be0a-kube-api-access-vbc5m\") pod \"metallb-operator-controller-manager-76d88688fb-s4d79\" (UID: \"a01a3bc0-24e8-423f-87c8-32a5cca2be0a\") " pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.077140 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a01a3bc0-24e8-423f-87c8-32a5cca2be0a-webhook-cert\") pod \"metallb-operator-controller-manager-76d88688fb-s4d79\" (UID: \"a01a3bc0-24e8-423f-87c8-32a5cca2be0a\") " pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.077163 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a01a3bc0-24e8-423f-87c8-32a5cca2be0a-apiservice-cert\") pod \"metallb-operator-controller-manager-76d88688fb-s4d79\" (UID: \"a01a3bc0-24e8-423f-87c8-32a5cca2be0a\") " pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.178412 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbc5m\" (UniqueName: \"kubernetes.io/projected/a01a3bc0-24e8-423f-87c8-32a5cca2be0a-kube-api-access-vbc5m\") pod \"metallb-operator-controller-manager-76d88688fb-s4d79\" (UID: \"a01a3bc0-24e8-423f-87c8-32a5cca2be0a\") " pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.178458 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a01a3bc0-24e8-423f-87c8-32a5cca2be0a-webhook-cert\") pod \"metallb-operator-controller-manager-76d88688fb-s4d79\" (UID: \"a01a3bc0-24e8-423f-87c8-32a5cca2be0a\") " pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.178482 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a01a3bc0-24e8-423f-87c8-32a5cca2be0a-apiservice-cert\") pod \"metallb-operator-controller-manager-76d88688fb-s4d79\" (UID: \"a01a3bc0-24e8-423f-87c8-32a5cca2be0a\") " pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.183805 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a01a3bc0-24e8-423f-87c8-32a5cca2be0a-apiservice-cert\") pod \"metallb-operator-controller-manager-76d88688fb-s4d79\" (UID: \"a01a3bc0-24e8-423f-87c8-32a5cca2be0a\") " pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.195850 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a01a3bc0-24e8-423f-87c8-32a5cca2be0a-webhook-cert\") pod \"metallb-operator-controller-manager-76d88688fb-s4d79\" (UID: \"a01a3bc0-24e8-423f-87c8-32a5cca2be0a\") " pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.206093 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbc5m\" (UniqueName: \"kubernetes.io/projected/a01a3bc0-24e8-423f-87c8-32a5cca2be0a-kube-api-access-vbc5m\") pod \"metallb-operator-controller-manager-76d88688fb-s4d79\" (UID: \"a01a3bc0-24e8-423f-87c8-32a5cca2be0a\") " pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.236335 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.281911 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk"] Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.287848 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.294125 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.294481 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-knk7s" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.294586 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.307440 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk"] Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.483781 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e-apiservice-cert\") pod \"metallb-operator-webhook-server-6ff9c476c7-v8zvk\" (UID: \"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e\") " pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.484105 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdtqk\" (UniqueName: \"kubernetes.io/projected/0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e-kube-api-access-pdtqk\") pod \"metallb-operator-webhook-server-6ff9c476c7-v8zvk\" (UID: \"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e\") " pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.484165 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e-webhook-cert\") pod \"metallb-operator-webhook-server-6ff9c476c7-v8zvk\" (UID: \"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e\") " pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.585402 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e-apiservice-cert\") pod \"metallb-operator-webhook-server-6ff9c476c7-v8zvk\" (UID: \"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e\") " pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.585446 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdtqk\" (UniqueName: \"kubernetes.io/projected/0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e-kube-api-access-pdtqk\") pod \"metallb-operator-webhook-server-6ff9c476c7-v8zvk\" (UID: \"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e\") " pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.585503 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e-webhook-cert\") pod \"metallb-operator-webhook-server-6ff9c476c7-v8zvk\" (UID: \"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e\") " pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 
17:13:29.592774 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e-webhook-cert\") pod \"metallb-operator-webhook-server-6ff9c476c7-v8zvk\" (UID: \"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e\") " pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.594391 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e-apiservice-cert\") pod \"metallb-operator-webhook-server-6ff9c476c7-v8zvk\" (UID: \"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e\") " pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.603270 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdtqk\" (UniqueName: \"kubernetes.io/projected/0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e-kube-api-access-pdtqk\") pod \"metallb-operator-webhook-server-6ff9c476c7-v8zvk\" (UID: \"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e\") " pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.653624 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.721402 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6s266"] Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.722826 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.741145 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6s266"] Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.755404 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79"] Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.790523 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxqxz\" (UniqueName: \"kubernetes.io/projected/676da5f2-4cba-481a-be5a-25de505d109f-kube-api-access-gxqxz\") pod \"redhat-marketplace-6s266\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.790903 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-utilities\") pod \"redhat-marketplace-6s266\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.791120 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-catalog-content\") pod \"redhat-marketplace-6s266\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.892898 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-gxqxz\" (UniqueName: \"kubernetes.io/projected/676da5f2-4cba-481a-be5a-25de505d109f-kube-api-access-gxqxz\") pod \"redhat-marketplace-6s266\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.892979 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-utilities\") pod \"redhat-marketplace-6s266\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.893046 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-catalog-content\") pod \"redhat-marketplace-6s266\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.893616 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-catalog-content\") pod \"redhat-marketplace-6s266\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.893657 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-utilities\") pod \"redhat-marketplace-6s266\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:29 crc kubenswrapper[4710]: I1128 17:13:29.921993 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxqxz\" (UniqueName: \"kubernetes.io/projected/676da5f2-4cba-481a-be5a-25de505d109f-kube-api-access-gxqxz\") pod \"redhat-marketplace-6s266\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:30 crc kubenswrapper[4710]: I1128 17:13:30.052711 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:30 crc kubenswrapper[4710]: I1128 17:13:30.180896 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk"] Nov 28 17:13:30 crc kubenswrapper[4710]: W1128 17:13:30.196682 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c20cdd4_c8d5_4bfc_ba23_8cc4b544b27e.slice/crio-bb2ca0c30e563be5447a25390a8d4572a017887ccd97298e6daa8b863fe0d930 WatchSource:0}: Error finding container bb2ca0c30e563be5447a25390a8d4572a017887ccd97298e6daa8b863fe0d930: Status 404 returned error can't find the container with id bb2ca0c30e563be5447a25390a8d4572a017887ccd97298e6daa8b863fe0d930 Nov 28 17:13:30 crc kubenswrapper[4710]: I1128 17:13:30.515064 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6s266"] Nov 28 17:13:30 crc kubenswrapper[4710]: W1128 17:13:30.527264 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod676da5f2_4cba_481a_be5a_25de505d109f.slice/crio-27d14f030cbc13fba3b7a300f124710c10efeaec898a1ce8cd7561512e466a8b WatchSource:0}: Error finding container 27d14f030cbc13fba3b7a300f124710c10efeaec898a1ce8cd7561512e466a8b: Status 404 returned error can't find the container with id 27d14f030cbc13fba3b7a300f124710c10efeaec898a1ce8cd7561512e466a8b Nov 28 17:13:30 crc kubenswrapper[4710]: I1128 17:13:30.765212 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" event={"ID":"a01a3bc0-24e8-423f-87c8-32a5cca2be0a","Type":"ContainerStarted","Data":"0dfd07814c7fb6a35e83c940cb4846c9c39ca02ccf7c61ad505ee22f074a2b56"} Nov 28 17:13:30 crc kubenswrapper[4710]: I1128 17:13:30.766513 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" event={"ID":"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e","Type":"ContainerStarted","Data":"bb2ca0c30e563be5447a25390a8d4572a017887ccd97298e6daa8b863fe0d930"} Nov 28 17:13:30 crc kubenswrapper[4710]: I1128 17:13:30.768099 4710 generic.go:334] "Generic (PLEG): container finished" podID="676da5f2-4cba-481a-be5a-25de505d109f" containerID="1e148ba7f3258345877eb76352201a4632c7fd93a67f504e2ca230c2a7d8f61a" exitCode=0 Nov 28 17:13:30 crc kubenswrapper[4710]: I1128 17:13:30.768140 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s266" event={"ID":"676da5f2-4cba-481a-be5a-25de505d109f","Type":"ContainerDied","Data":"1e148ba7f3258345877eb76352201a4632c7fd93a67f504e2ca230c2a7d8f61a"} Nov 28 17:13:30 crc kubenswrapper[4710]: I1128 17:13:30.768167 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s266" event={"ID":"676da5f2-4cba-481a-be5a-25de505d109f","Type":"ContainerStarted","Data":"27d14f030cbc13fba3b7a300f124710c10efeaec898a1ce8cd7561512e466a8b"} Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.319566 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wbkzw"] Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.323143 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.336614 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wbkzw"] Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.515005 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-catalog-content\") pod \"community-operators-wbkzw\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.515276 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx5qv\" (UniqueName: \"kubernetes.io/projected/3a3c32f2-ed3b-49e6-a652-47a3eab91254-kube-api-access-hx5qv\") pod \"community-operators-wbkzw\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.515336 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-utilities\") pod \"community-operators-wbkzw\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.617621 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx5qv\" (UniqueName: \"kubernetes.io/projected/3a3c32f2-ed3b-49e6-a652-47a3eab91254-kube-api-access-hx5qv\") pod \"community-operators-wbkzw\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.617712 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-utilities\") pod \"community-operators-wbkzw\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.617814 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-catalog-content\") pod \"community-operators-wbkzw\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.618389 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-catalog-content\") pod \"community-operators-wbkzw\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.618955 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-utilities\") pod \"community-operators-wbkzw\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.650048 4710 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hx5qv\" (UniqueName: \"kubernetes.io/projected/3a3c32f2-ed3b-49e6-a652-47a3eab91254-kube-api-access-hx5qv\") pod \"community-operators-wbkzw\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:31 crc kubenswrapper[4710]: I1128 17:13:31.651251 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:32 crc kubenswrapper[4710]: I1128 17:13:32.236186 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wbkzw"] Nov 28 17:13:32 crc kubenswrapper[4710]: I1128 17:13:32.793863 4710 generic.go:334] "Generic (PLEG): container finished" podID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerID="851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57" exitCode=0 Nov 28 17:13:32 crc kubenswrapper[4710]: I1128 17:13:32.793947 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbkzw" event={"ID":"3a3c32f2-ed3b-49e6-a652-47a3eab91254","Type":"ContainerDied","Data":"851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57"} Nov 28 17:13:32 crc kubenswrapper[4710]: I1128 17:13:32.793979 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbkzw" event={"ID":"3a3c32f2-ed3b-49e6-a652-47a3eab91254","Type":"ContainerStarted","Data":"a746cd9b2a626430dac3a7b3fbfea6846daca7aad1577f81903b2d819fa1d908"} Nov 28 17:13:32 crc kubenswrapper[4710]: I1128 17:13:32.799349 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:13:32 crc kubenswrapper[4710]: I1128 17:13:32.810929 4710 generic.go:334] "Generic (PLEG): container finished" podID="676da5f2-4cba-481a-be5a-25de505d109f" containerID="023da7b86f380165c28aec9c8d2469bc745368bf6ea21ddeede5b76ec019767c" exitCode=0 Nov 28 17:13:32 crc kubenswrapper[4710]: I1128 17:13:32.810980 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s266" event={"ID":"676da5f2-4cba-481a-be5a-25de505d109f","Type":"ContainerDied","Data":"023da7b86f380165c28aec9c8d2469bc745368bf6ea21ddeede5b76ec019767c"} Nov 28 17:13:37 crc kubenswrapper[4710]: I1128 17:13:37.006059 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" event={"ID":"a01a3bc0-24e8-423f-87c8-32a5cca2be0a","Type":"ContainerStarted","Data":"e1eaec35ddce61abf4ae6a3ab7f1b1e18349d291eba9287ab5a13796d07376ea"} Nov 28 17:13:37 crc kubenswrapper[4710]: I1128 17:13:37.022480 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:13:37 crc kubenswrapper[4710]: I1128 17:13:37.028375 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s266" event={"ID":"676da5f2-4cba-481a-be5a-25de505d109f","Type":"ContainerStarted","Data":"e1273e0b41b57ab4032a66b94f5b8c9924238d18433a5bba10f52f7a197ae8da"} Nov 28 17:13:37 crc kubenswrapper[4710]: I1128 17:13:37.048390 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" podStartSLOduration=2.450192907 podStartE2EDuration="9.048355146s" podCreationTimestamp="2025-11-28 17:13:28 +0000 UTC" firstStartedPulling="2025-11-28 
17:13:29.767507022 +0000 UTC m=+899.025807067" lastFinishedPulling="2025-11-28 17:13:36.365669261 +0000 UTC m=+905.623969306" observedRunningTime="2025-11-28 17:13:37.035494136 +0000 UTC m=+906.293794181" watchObservedRunningTime="2025-11-28 17:13:37.048355146 +0000 UTC m=+906.306655191"
Nov 28 17:13:37 crc kubenswrapper[4710]: I1128 17:13:37.061356 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6s266" podStartSLOduration=2.472230343 podStartE2EDuration="8.061338851s" podCreationTimestamp="2025-11-28 17:13:29 +0000 UTC" firstStartedPulling="2025-11-28 17:13:30.769734765 +0000 UTC m=+900.028034810" lastFinishedPulling="2025-11-28 17:13:36.358843273 +0000 UTC m=+905.617143318" observedRunningTime="2025-11-28 17:13:37.061032731 +0000 UTC m=+906.319332776" watchObservedRunningTime="2025-11-28 17:13:37.061338851 +0000 UTC m=+906.319638896"
Nov 28 17:13:38 crc kubenswrapper[4710]: I1128 17:13:38.040588 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" event={"ID":"0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e","Type":"ContainerStarted","Data":"2d6fc8abbc10c79391cc5359bb8de1b82143b0de557a699f088e0b9eb86e3f2d"}
Nov 28 17:13:38 crc kubenswrapper[4710]: I1128 17:13:38.040937 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk"
Nov 28 17:13:38 crc kubenswrapper[4710]: I1128 17:13:38.043906 4710 generic.go:334] "Generic (PLEG): container finished" podID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerID="e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892" exitCode=0
Nov 28 17:13:38 crc kubenswrapper[4710]: I1128 17:13:38.044012 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbkzw" event={"ID":"3a3c32f2-ed3b-49e6-a652-47a3eab91254","Type":"ContainerDied","Data":"e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892"}
Nov 28 17:13:38 crc kubenswrapper[4710]: I1128 17:13:38.063472 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" podStartSLOduration=2.901567284 podStartE2EDuration="9.06345187s" podCreationTimestamp="2025-11-28 17:13:29 +0000 UTC" firstStartedPulling="2025-11-28 17:13:30.203207516 +0000 UTC m=+899.461507561" lastFinishedPulling="2025-11-28 17:13:36.365092102 +0000 UTC m=+905.623392147" observedRunningTime="2025-11-28 17:13:38.05968674 +0000 UTC m=+907.317986795" watchObservedRunningTime="2025-11-28 17:13:38.06345187 +0000 UTC m=+907.321751925"
Nov 28 17:13:39 crc kubenswrapper[4710]: I1128 17:13:39.053271 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbkzw" event={"ID":"3a3c32f2-ed3b-49e6-a652-47a3eab91254","Type":"ContainerStarted","Data":"2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7"}
Nov 28 17:13:39 crc kubenswrapper[4710]: I1128 17:13:39.074822 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wbkzw" podStartSLOduration=2.387560654 podStartE2EDuration="8.074803954s" podCreationTimestamp="2025-11-28 17:13:31 +0000 UTC" firstStartedPulling="2025-11-28 17:13:32.799077015 +0000 UTC m=+902.057377060" lastFinishedPulling="2025-11-28 17:13:38.486320315 +0000 UTC m=+907.744620360" observedRunningTime="2025-11-28 17:13:39.069478434 +0000 UTC m=+908.327778479" watchObservedRunningTime="2025-11-28 17:13:39.074803954 +0000 UTC m=+908.333103999"
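The "Observed pod startup duration" entries above make the metric's arithmetic visible: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. pull time is excluded from the SLO number. A standalone check against the community-operators-wbkzw entry (the timestamps below are copied from that log line; nothing else is assumed):

    // slo_math.go - reproduces the startup-duration arithmetic from the
    // community-operators-wbkzw entry above.
    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // matches the quoted timestamps

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-11-28 17:13:31 +0000 UTC")
        firstPull := mustParse("2025-11-28 17:13:32.799077015 +0000 UTC")
        lastPull := mustParse("2025-11-28 17:13:38.486320315 +0000 UTC")
        running := mustParse("2025-11-28 17:13:39.074803954 +0000 UTC")

        e2e := running.Sub(created)          // 8.074803954s -> podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 2.387560654s -> podStartSLOduration
        fmt.Println(e2e, slo)
    }

The same relation holds for the redhat-marketplace-6s266 and webhook-server entries, which is why pods that spend most of their startup pulling images still report a small SLO duration.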
watchObservedRunningTime="2025-11-28 17:13:39.074803954 +0000 UTC m=+908.333103999" Nov 28 17:13:40 crc kubenswrapper[4710]: I1128 17:13:40.053345 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:40 crc kubenswrapper[4710]: I1128 17:13:40.053394 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:40 crc kubenswrapper[4710]: I1128 17:13:40.108793 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:41 crc kubenswrapper[4710]: I1128 17:13:41.117018 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:41 crc kubenswrapper[4710]: I1128 17:13:41.652231 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:41 crc kubenswrapper[4710]: I1128 17:13:41.652463 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:41 crc kubenswrapper[4710]: I1128 17:13:41.695487 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:43 crc kubenswrapper[4710]: I1128 17:13:43.169937 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:43 crc kubenswrapper[4710]: I1128 17:13:43.343829 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:13:43 crc kubenswrapper[4710]: I1128 17:13:43.343879 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:13:43 crc kubenswrapper[4710]: I1128 17:13:43.915919 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6s266"] Nov 28 17:13:43 crc kubenswrapper[4710]: I1128 17:13:43.916396 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6s266" podUID="676da5f2-4cba-481a-be5a-25de505d109f" containerName="registry-server" containerID="cri-o://e1273e0b41b57ab4032a66b94f5b8c9924238d18433a5bba10f52f7a197ae8da" gracePeriod=2 Nov 28 17:13:44 crc kubenswrapper[4710]: I1128 17:13:44.316020 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wbkzw"] Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.099784 4710 generic.go:334] "Generic (PLEG): container finished" podID="676da5f2-4cba-481a-be5a-25de505d109f" containerID="e1273e0b41b57ab4032a66b94f5b8c9924238d18433a5bba10f52f7a197ae8da" exitCode=0 Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.100007 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wbkzw" 
podUID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerName="registry-server" containerID="cri-o://2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7" gracePeriod=2 Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.100264 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s266" event={"ID":"676da5f2-4cba-481a-be5a-25de505d109f","Type":"ContainerDied","Data":"e1273e0b41b57ab4032a66b94f5b8c9924238d18433a5bba10f52f7a197ae8da"} Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.100292 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6s266" event={"ID":"676da5f2-4cba-481a-be5a-25de505d109f","Type":"ContainerDied","Data":"27d14f030cbc13fba3b7a300f124710c10efeaec898a1ce8cd7561512e466a8b"} Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.100303 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d14f030cbc13fba3b7a300f124710c10efeaec898a1ce8cd7561512e466a8b" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.123360 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.316899 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-catalog-content\") pod \"676da5f2-4cba-481a-be5a-25de505d109f\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.316964 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxqxz\" (UniqueName: \"kubernetes.io/projected/676da5f2-4cba-481a-be5a-25de505d109f-kube-api-access-gxqxz\") pod \"676da5f2-4cba-481a-be5a-25de505d109f\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.317085 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-utilities\") pod \"676da5f2-4cba-481a-be5a-25de505d109f\" (UID: \"676da5f2-4cba-481a-be5a-25de505d109f\") " Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.319905 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-utilities" (OuterVolumeSpecName: "utilities") pod "676da5f2-4cba-481a-be5a-25de505d109f" (UID: "676da5f2-4cba-481a-be5a-25de505d109f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.323962 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676da5f2-4cba-481a-be5a-25de505d109f-kube-api-access-gxqxz" (OuterVolumeSpecName: "kube-api-access-gxqxz") pod "676da5f2-4cba-481a-be5a-25de505d109f" (UID: "676da5f2-4cba-481a-be5a-25de505d109f"). InnerVolumeSpecName "kube-api-access-gxqxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.337766 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "676da5f2-4cba-481a-be5a-25de505d109f" (UID: "676da5f2-4cba-481a-be5a-25de505d109f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.419623 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.419660 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/676da5f2-4cba-481a-be5a-25de505d109f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.419671 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxqxz\" (UniqueName: \"kubernetes.io/projected/676da5f2-4cba-481a-be5a-25de505d109f-kube-api-access-gxqxz\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.468895 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.520968 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-catalog-content\") pod \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.521063 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-utilities\") pod \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.521122 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx5qv\" (UniqueName: \"kubernetes.io/projected/3a3c32f2-ed3b-49e6-a652-47a3eab91254-kube-api-access-hx5qv\") pod \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\" (UID: \"3a3c32f2-ed3b-49e6-a652-47a3eab91254\") " Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.521880 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-utilities" (OuterVolumeSpecName: "utilities") pod "3a3c32f2-ed3b-49e6-a652-47a3eab91254" (UID: "3a3c32f2-ed3b-49e6-a652-47a3eab91254"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.526550 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a3c32f2-ed3b-49e6-a652-47a3eab91254-kube-api-access-hx5qv" (OuterVolumeSpecName: "kube-api-access-hx5qv") pod "3a3c32f2-ed3b-49e6-a652-47a3eab91254" (UID: "3a3c32f2-ed3b-49e6-a652-47a3eab91254"). InnerVolumeSpecName "kube-api-access-hx5qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.600472 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a3c32f2-ed3b-49e6-a652-47a3eab91254" (UID: "3a3c32f2-ed3b-49e6-a652-47a3eab91254"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.622675 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx5qv\" (UniqueName: \"kubernetes.io/projected/3a3c32f2-ed3b-49e6-a652-47a3eab91254-kube-api-access-hx5qv\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.622712 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:45 crc kubenswrapper[4710]: I1128 17:13:45.622723 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a3c32f2-ed3b-49e6-a652-47a3eab91254-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.108149 4710 generic.go:334] "Generic (PLEG): container finished" podID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerID="2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7" exitCode=0 Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.108217 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wbkzw" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.108253 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6s266" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.108251 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbkzw" event={"ID":"3a3c32f2-ed3b-49e6-a652-47a3eab91254","Type":"ContainerDied","Data":"2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7"} Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.108288 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbkzw" event={"ID":"3a3c32f2-ed3b-49e6-a652-47a3eab91254","Type":"ContainerDied","Data":"a746cd9b2a626430dac3a7b3fbfea6846daca7aad1577f81903b2d819fa1d908"} Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.108325 4710 scope.go:117] "RemoveContainer" containerID="2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.127087 4710 scope.go:117] "RemoveContainer" containerID="e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.140215 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wbkzw"] Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.151831 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wbkzw"] Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.157002 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6s266"] Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.161678 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6s266"] Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.167553 4710 scope.go:117] "RemoveContainer" containerID="851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.184695 4710 scope.go:117] "RemoveContainer" 
containerID="2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7" Nov 28 17:13:46 crc kubenswrapper[4710]: E1128 17:13:46.185233 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7\": container with ID starting with 2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7 not found: ID does not exist" containerID="2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.185265 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7"} err="failed to get container status \"2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7\": rpc error: code = NotFound desc = could not find container \"2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7\": container with ID starting with 2c261bd0770284a4a4c86a59e9eeeac7f140d6c0c6bdf5a7796af2229cef1ab7 not found: ID does not exist" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.185310 4710 scope.go:117] "RemoveContainer" containerID="e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892" Nov 28 17:13:46 crc kubenswrapper[4710]: E1128 17:13:46.185696 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892\": container with ID starting with e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892 not found: ID does not exist" containerID="e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.185744 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892"} err="failed to get container status \"e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892\": rpc error: code = NotFound desc = could not find container \"e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892\": container with ID starting with e7ea9d3ab461a669582d9f430c0077152da6cea59711ae862fb7485895c78892 not found: ID does not exist" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.185845 4710 scope.go:117] "RemoveContainer" containerID="851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57" Nov 28 17:13:46 crc kubenswrapper[4710]: E1128 17:13:46.186177 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57\": container with ID starting with 851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57 not found: ID does not exist" containerID="851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57" Nov 28 17:13:46 crc kubenswrapper[4710]: I1128 17:13:46.186220 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57"} err="failed to get container status \"851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57\": rpc error: code = NotFound desc = could not find container \"851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57\": container with ID starting with 
851de553f5006d4f1a148caea05ec04b8818cd997c4e519744cbe0ccd4dcde57 not found: ID does not exist" Nov 28 17:13:47 crc kubenswrapper[4710]: I1128 17:13:47.151442 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" path="/var/lib/kubelet/pods/3a3c32f2-ed3b-49e6-a652-47a3eab91254/volumes" Nov 28 17:13:47 crc kubenswrapper[4710]: I1128 17:13:47.152216 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676da5f2-4cba-481a-be5a-25de505d109f" path="/var/lib/kubelet/pods/676da5f2-4cba-481a-be5a-25de505d109f/volumes" Nov 28 17:13:49 crc kubenswrapper[4710]: I1128 17:13:49.659158 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6ff9c476c7-v8zvk" Nov 28 17:14:09 crc kubenswrapper[4710]: I1128 17:14:09.239897 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76d88688fb-s4d79" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.082344 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-7t69j"] Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.083088 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerName="extract-utilities" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.083114 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerName="extract-utilities" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.083136 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676da5f2-4cba-481a-be5a-25de505d109f" containerName="extract-content" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.083147 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="676da5f2-4cba-481a-be5a-25de505d109f" containerName="extract-content" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.083165 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerName="extract-content" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.083173 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerName="extract-content" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.083224 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676da5f2-4cba-481a-be5a-25de505d109f" containerName="extract-utilities" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.083232 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="676da5f2-4cba-481a-be5a-25de505d109f" containerName="extract-utilities" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.083245 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676da5f2-4cba-481a-be5a-25de505d109f" containerName="registry-server" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.083253 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="676da5f2-4cba-481a-be5a-25de505d109f" containerName="registry-server" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.083272 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerName="registry-server" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.083281 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerName="registry-server" 
Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.083492 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="676da5f2-4cba-481a-be5a-25de505d109f" containerName="registry-server" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.083510 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a3c32f2-ed3b-49e6-a652-47a3eab91254" containerName="registry-server" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.086697 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.088863 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7x5rg" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.090240 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.090259 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.092833 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp"] Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.095666 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.097729 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.123290 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp"] Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.179227 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kqv5c"] Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.188129 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.200195 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.200386 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.200494 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-zg8jp" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.200600 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.209224 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-frr-sockets\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.209951 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftzj7\" (UniqueName: \"kubernetes.io/projected/05aaf633-4b72-414c-bed7-072766131fb5-kube-api-access-ftzj7\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.210136 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-frr-conf\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.210203 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/499217b3-5eff-47d2-ba82-b340a1fa5149-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-pj7zp\" (UID: \"499217b3-5eff-47d2-ba82-b340a1fa5149\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.210333 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdxvt\" (UniqueName: \"kubernetes.io/projected/499217b3-5eff-47d2-ba82-b340a1fa5149-kube-api-access-mdxvt\") pod \"frr-k8s-webhook-server-7fcb986d4-pj7zp\" (UID: \"499217b3-5eff-47d2-ba82-b340a1fa5149\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.210428 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05aaf633-4b72-414c-bed7-072766131fb5-metrics-certs\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.210508 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/05aaf633-4b72-414c-bed7-072766131fb5-frr-startup\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 
17:14:10.210528 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-reloader\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.210586 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-metrics\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.227847 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-jxlkv"] Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.229024 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.233026 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.256199 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jxlkv"] Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314581 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-metrics-certs\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314636 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftzj7\" (UniqueName: \"kubernetes.io/projected/05aaf633-4b72-414c-bed7-072766131fb5-kube-api-access-ftzj7\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314659 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c96510f3-24a2-4722-83c2-a1d39168687b-cert\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314680 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-frr-conf\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314698 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/499217b3-5eff-47d2-ba82-b340a1fa5149-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-pj7zp\" (UID: \"499217b3-5eff-47d2-ba82-b340a1fa5149\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314715 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78cx7\" (UniqueName: 
\"kubernetes.io/projected/b1655d12-6e92-47ad-b93b-f664ec03d1d0-kube-api-access-78cx7\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314733 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdxvt\" (UniqueName: \"kubernetes.io/projected/499217b3-5eff-47d2-ba82-b340a1fa5149-kube-api-access-mdxvt\") pod \"frr-k8s-webhook-server-7fcb986d4-pj7zp\" (UID: \"499217b3-5eff-47d2-ba82-b340a1fa5149\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314782 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/05aaf633-4b72-414c-bed7-072766131fb5-frr-startup\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314803 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b1655d12-6e92-47ad-b93b-f664ec03d1d0-metallb-excludel2\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314828 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-memberlist\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314857 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ddjl\" (UniqueName: \"kubernetes.io/projected/c96510f3-24a2-4722-83c2-a1d39168687b-kube-api-access-9ddjl\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314879 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-frr-sockets\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314917 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05aaf633-4b72-414c-bed7-072766131fb5-metrics-certs\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314938 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c96510f3-24a2-4722-83c2-a1d39168687b-metrics-certs\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314957 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-reloader\") pod \"frr-k8s-7t69j\" 
(UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.314975 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-metrics\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.315342 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-metrics\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.315901 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-frr-conf\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.315973 4710 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.316010 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/499217b3-5eff-47d2-ba82-b340a1fa5149-cert podName:499217b3-5eff-47d2-ba82-b340a1fa5149 nodeName:}" failed. No retries permitted until 2025-11-28 17:14:10.815996358 +0000 UTC m=+940.074296403 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/499217b3-5eff-47d2-ba82-b340a1fa5149-cert") pod "frr-k8s-webhook-server-7fcb986d4-pj7zp" (UID: "499217b3-5eff-47d2-ba82-b340a1fa5149") : secret "frr-k8s-webhook-server-cert" not found Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.316901 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/05aaf633-4b72-414c-bed7-072766131fb5-frr-startup\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.317092 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-frr-sockets\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.319007 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/05aaf633-4b72-414c-bed7-072766131fb5-reloader\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.322678 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/05aaf633-4b72-414c-bed7-072766131fb5-metrics-certs\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.349331 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftzj7\" (UniqueName: 
\"kubernetes.io/projected/05aaf633-4b72-414c-bed7-072766131fb5-kube-api-access-ftzj7\") pod \"frr-k8s-7t69j\" (UID: \"05aaf633-4b72-414c-bed7-072766131fb5\") " pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.354618 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdxvt\" (UniqueName: \"kubernetes.io/projected/499217b3-5eff-47d2-ba82-b340a1fa5149-kube-api-access-mdxvt\") pod \"frr-k8s-webhook-server-7fcb986d4-pj7zp\" (UID: \"499217b3-5eff-47d2-ba82-b340a1fa5149\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.416273 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c96510f3-24a2-4722-83c2-a1d39168687b-cert\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.416701 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78cx7\" (UniqueName: \"kubernetes.io/projected/b1655d12-6e92-47ad-b93b-f664ec03d1d0-kube-api-access-78cx7\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.416911 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b1655d12-6e92-47ad-b93b-f664ec03d1d0-metallb-excludel2\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.417033 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-memberlist\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.417122 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ddjl\" (UniqueName: \"kubernetes.io/projected/c96510f3-24a2-4722-83c2-a1d39168687b-kube-api-access-9ddjl\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.417229 4710 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.417321 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-memberlist podName:b1655d12-6e92-47ad-b93b-f664ec03d1d0 nodeName:}" failed. No retries permitted until 2025-11-28 17:14:10.917300674 +0000 UTC m=+940.175600719 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-memberlist") pod "speaker-kqv5c" (UID: "b1655d12-6e92-47ad-b93b-f664ec03d1d0") : secret "metallb-memberlist" not found Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.417482 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c96510f3-24a2-4722-83c2-a1d39168687b-metrics-certs\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.417599 4710 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.417639 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b1655d12-6e92-47ad-b93b-f664ec03d1d0-metallb-excludel2\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.417672 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c96510f3-24a2-4722-83c2-a1d39168687b-metrics-certs podName:c96510f3-24a2-4722-83c2-a1d39168687b nodeName:}" failed. No retries permitted until 2025-11-28 17:14:10.917654495 +0000 UTC m=+940.175954540 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c96510f3-24a2-4722-83c2-a1d39168687b-metrics-certs") pod "controller-f8648f98b-jxlkv" (UID: "c96510f3-24a2-4722-83c2-a1d39168687b") : secret "controller-certs-secret" not found Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.417769 4710 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.417815 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-metrics-certs podName:b1655d12-6e92-47ad-b93b-f664ec03d1d0 nodeName:}" failed. No retries permitted until 2025-11-28 17:14:10.9178024 +0000 UTC m=+940.176102455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-metrics-certs") pod "speaker-kqv5c" (UID: "b1655d12-6e92-47ad-b93b-f664ec03d1d0") : secret "speaker-certs-secret" not found Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.417870 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-metrics-certs\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.419355 4710 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.425512 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.431093 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c96510f3-24a2-4722-83c2-a1d39168687b-cert\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.434438 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78cx7\" (UniqueName: \"kubernetes.io/projected/b1655d12-6e92-47ad-b93b-f664ec03d1d0-kube-api-access-78cx7\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.440910 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ddjl\" (UniqueName: \"kubernetes.io/projected/c96510f3-24a2-4722-83c2-a1d39168687b-kube-api-access-9ddjl\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.824538 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/499217b3-5eff-47d2-ba82-b340a1fa5149-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-pj7zp\" (UID: \"499217b3-5eff-47d2-ba82-b340a1fa5149\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.829778 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/499217b3-5eff-47d2-ba82-b340a1fa5149-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-pj7zp\" (UID: \"499217b3-5eff-47d2-ba82-b340a1fa5149\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.926109 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-memberlist\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.926239 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c96510f3-24a2-4722-83c2-a1d39168687b-metrics-certs\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.926301 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-metrics-certs\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.926411 4710 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 28 17:14:10 crc kubenswrapper[4710]: E1128 17:14:10.926525 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-memberlist podName:b1655d12-6e92-47ad-b93b-f664ec03d1d0 nodeName:}" failed. 
No retries permitted until 2025-11-28 17:14:11.926495828 +0000 UTC m=+941.184795873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-memberlist") pod "speaker-kqv5c" (UID: "b1655d12-6e92-47ad-b93b-f664ec03d1d0") : secret "metallb-memberlist" not found Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.932771 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c96510f3-24a2-4722-83c2-a1d39168687b-metrics-certs\") pod \"controller-f8648f98b-jxlkv\" (UID: \"c96510f3-24a2-4722-83c2-a1d39168687b\") " pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:10 crc kubenswrapper[4710]: I1128 17:14:10.933224 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-metrics-certs\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:11 crc kubenswrapper[4710]: I1128 17:14:11.030002 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:11 crc kubenswrapper[4710]: I1128 17:14:11.172654 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:11 crc kubenswrapper[4710]: I1128 17:14:11.293039 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerStarted","Data":"180c00bc54722bb21fb5df9fa1f06c01e82d0adfa9d21fe3e3b3ad42bc0925e8"} Nov 28 17:14:11 crc kubenswrapper[4710]: I1128 17:14:11.453984 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp"] Nov 28 17:14:11 crc kubenswrapper[4710]: I1128 17:14:11.627142 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-jxlkv"] Nov 28 17:14:11 crc kubenswrapper[4710]: W1128 17:14:11.629666 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc96510f3_24a2_4722_83c2_a1d39168687b.slice/crio-44742e3b769df53c34ec42a816532aa24836a95b9706dab535130759847e444d WatchSource:0}: Error finding container 44742e3b769df53c34ec42a816532aa24836a95b9706dab535130759847e444d: Status 404 returned error can't find the container with id 44742e3b769df53c34ec42a816532aa24836a95b9706dab535130759847e444d Nov 28 17:14:11 crc kubenswrapper[4710]: I1128 17:14:11.943046 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-memberlist\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:11 crc kubenswrapper[4710]: I1128 17:14:11.956447 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b1655d12-6e92-47ad-b93b-f664ec03d1d0-memberlist\") pod \"speaker-kqv5c\" (UID: \"b1655d12-6e92-47ad-b93b-f664ec03d1d0\") " pod="metallb-system/speaker-kqv5c" Nov 28 17:14:12 crc kubenswrapper[4710]: I1128 17:14:12.011204 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kqv5c" Nov 28 17:14:12 crc kubenswrapper[4710]: I1128 17:14:12.327515 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jxlkv" event={"ID":"c96510f3-24a2-4722-83c2-a1d39168687b","Type":"ContainerStarted","Data":"1e6f061a4154bd927f36e79959902d460683168387ac77d1d4642cb49a96c63a"} Nov 28 17:14:12 crc kubenswrapper[4710]: I1128 17:14:12.327558 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jxlkv" event={"ID":"c96510f3-24a2-4722-83c2-a1d39168687b","Type":"ContainerStarted","Data":"aef8dffcd505a8e6e82e3357d13ab4b82539d1ed95ca92ca980cd722fbe18ddc"} Nov 28 17:14:12 crc kubenswrapper[4710]: I1128 17:14:12.327569 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-jxlkv" event={"ID":"c96510f3-24a2-4722-83c2-a1d39168687b","Type":"ContainerStarted","Data":"44742e3b769df53c34ec42a816532aa24836a95b9706dab535130759847e444d"} Nov 28 17:14:12 crc kubenswrapper[4710]: I1128 17:14:12.328545 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:12 crc kubenswrapper[4710]: I1128 17:14:12.344032 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" event={"ID":"499217b3-5eff-47d2-ba82-b340a1fa5149","Type":"ContainerStarted","Data":"52ecaf0db6fa1160d685f6b2e71d183df6c577b7c9e889159a0cbd7fb219485f"} Nov 28 17:14:12 crc kubenswrapper[4710]: I1128 17:14:12.369000 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kqv5c" event={"ID":"b1655d12-6e92-47ad-b93b-f664ec03d1d0","Type":"ContainerStarted","Data":"3d3923f03dbbd661b156a4643bfe81c9419e6357304eda82cea49f4fa83a1e0b"} Nov 28 17:14:12 crc kubenswrapper[4710]: I1128 17:14:12.373363 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-jxlkv" podStartSLOduration=2.373340196 podStartE2EDuration="2.373340196s" podCreationTimestamp="2025-11-28 17:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:14:12.369391472 +0000 UTC m=+941.627691527" watchObservedRunningTime="2025-11-28 17:14:12.373340196 +0000 UTC m=+941.631640241" Nov 28 17:14:13 crc kubenswrapper[4710]: I1128 17:14:13.346167 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:14:13 crc kubenswrapper[4710]: I1128 17:14:13.346551 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:14:13 crc kubenswrapper[4710]: I1128 17:14:13.385506 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kqv5c" event={"ID":"b1655d12-6e92-47ad-b93b-f664ec03d1d0","Type":"ContainerStarted","Data":"fbe38e5e44857f77262b883044850daba08c261119338304a6772d3dd7f70cc5"} Nov 28 17:14:13 crc kubenswrapper[4710]: I1128 17:14:13.385543 4710 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="metallb-system/speaker-kqv5c" event={"ID":"b1655d12-6e92-47ad-b93b-f664ec03d1d0","Type":"ContainerStarted","Data":"63e63039f2ffb80bff4b5075f7b67052ac090e1dcdf1ea5759a426d942851b2c"} Nov 28 17:14:13 crc kubenswrapper[4710]: I1128 17:14:13.385570 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kqv5c" Nov 28 17:14:13 crc kubenswrapper[4710]: I1128 17:14:13.413663 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kqv5c" podStartSLOduration=3.413648109 podStartE2EDuration="3.413648109s" podCreationTimestamp="2025-11-28 17:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:14:13.409858409 +0000 UTC m=+942.668158454" watchObservedRunningTime="2025-11-28 17:14:13.413648109 +0000 UTC m=+942.671948154" Nov 28 17:14:20 crc kubenswrapper[4710]: I1128 17:14:20.437847 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" event={"ID":"499217b3-5eff-47d2-ba82-b340a1fa5149","Type":"ContainerStarted","Data":"bdab2e5bce944eb84db693eb3a208b5953a4364f5b2e6bc06d4ac683aa11ff86"} Nov 28 17:14:20 crc kubenswrapper[4710]: I1128 17:14:20.438374 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:20 crc kubenswrapper[4710]: I1128 17:14:20.441595 4710 generic.go:334] "Generic (PLEG): container finished" podID="05aaf633-4b72-414c-bed7-072766131fb5" containerID="a80169cbf76488acde016f33902ed12255d4a6e91ee5da4d5f71bdac2b4dc6d8" exitCode=0 Nov 28 17:14:20 crc kubenswrapper[4710]: I1128 17:14:20.441622 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerDied","Data":"a80169cbf76488acde016f33902ed12255d4a6e91ee5da4d5f71bdac2b4dc6d8"} Nov 28 17:14:20 crc kubenswrapper[4710]: I1128 17:14:20.461369 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" podStartSLOduration=2.331703349 podStartE2EDuration="10.461338686s" podCreationTimestamp="2025-11-28 17:14:10 +0000 UTC" firstStartedPulling="2025-11-28 17:14:11.462496722 +0000 UTC m=+940.720796767" lastFinishedPulling="2025-11-28 17:14:19.592132049 +0000 UTC m=+948.850432104" observedRunningTime="2025-11-28 17:14:20.449500143 +0000 UTC m=+949.707800198" watchObservedRunningTime="2025-11-28 17:14:20.461338686 +0000 UTC m=+949.719638751" Nov 28 17:14:21 crc kubenswrapper[4710]: I1128 17:14:21.176213 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-jxlkv" Nov 28 17:14:21 crc kubenswrapper[4710]: I1128 17:14:21.451499 4710 generic.go:334] "Generic (PLEG): container finished" podID="05aaf633-4b72-414c-bed7-072766131fb5" containerID="c8483ec517c5206e0b67c8a8e3cc8a9248c90bc17f5ca0a5d4c6b0153a5d24db" exitCode=0 Nov 28 17:14:21 crc kubenswrapper[4710]: I1128 17:14:21.451623 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerDied","Data":"c8483ec517c5206e0b67c8a8e3cc8a9248c90bc17f5ca0a5d4c6b0153a5d24db"} Nov 28 17:14:22 crc kubenswrapper[4710]: I1128 17:14:22.014061 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/speaker-kqv5c" Nov 28 17:14:22 crc kubenswrapper[4710]: I1128 17:14:22.460929 4710 generic.go:334] "Generic (PLEG): container finished" podID="05aaf633-4b72-414c-bed7-072766131fb5" containerID="9bb16a3b8e8b044f21458fadb2820da893325e0278d0e7ecde3a661caca84976" exitCode=0 Nov 28 17:14:22 crc kubenswrapper[4710]: I1128 17:14:22.460988 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerDied","Data":"9bb16a3b8e8b044f21458fadb2820da893325e0278d0e7ecde3a661caca84976"} Nov 28 17:14:23 crc kubenswrapper[4710]: I1128 17:14:23.474258 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerStarted","Data":"c79e12d80fa8d17eefadca4478fbbb619450ba774c0eb02879b9cfb11cc80546"} Nov 28 17:14:23 crc kubenswrapper[4710]: I1128 17:14:23.474604 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerStarted","Data":"13e9df4eccb3a0ec41d5e4c82225921ee3b341755927087316672a76e5ce700f"} Nov 28 17:14:23 crc kubenswrapper[4710]: I1128 17:14:23.474619 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerStarted","Data":"e83f5a940c1a32bf964f75f39d1e0f3ef4f359c899dde9265b168410d796e686"} Nov 28 17:14:23 crc kubenswrapper[4710]: I1128 17:14:23.474629 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerStarted","Data":"2563bd91a9e4665207df07594d2dbf9628ef645222e7ef4ec6eef0c98e25e452"} Nov 28 17:14:23 crc kubenswrapper[4710]: I1128 17:14:23.474639 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerStarted","Data":"5540e9cb15fa3264a545ce00ba6505d93a4dff28ba9ab5e7856e8651c5d422ec"} Nov 28 17:14:24 crc kubenswrapper[4710]: I1128 17:14:24.485133 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-7t69j" event={"ID":"05aaf633-4b72-414c-bed7-072766131fb5","Type":"ContainerStarted","Data":"89b5aaa4c8585a434c916322727936042959a42fca6205d73a967993d9f3afc1"} Nov 28 17:14:24 crc kubenswrapper[4710]: I1128 17:14:24.485545 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:24 crc kubenswrapper[4710]: I1128 17:14:24.510032 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-7t69j" podStartSLOduration=5.560560162 podStartE2EDuration="14.510011844s" podCreationTimestamp="2025-11-28 17:14:10 +0000 UTC" firstStartedPulling="2025-11-28 17:14:10.607879465 +0000 UTC m=+939.866179510" lastFinishedPulling="2025-11-28 17:14:19.557331147 +0000 UTC m=+948.815631192" observedRunningTime="2025-11-28 17:14:24.502437975 +0000 UTC m=+953.760738040" watchObservedRunningTime="2025-11-28 17:14:24.510011844 +0000 UTC m=+953.768311889" Nov 28 17:14:24 crc kubenswrapper[4710]: I1128 17:14:24.991374 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x54rv"] Nov 28 17:14:24 crc kubenswrapper[4710]: I1128 17:14:24.992540 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x54rv" Nov 28 17:14:24 crc kubenswrapper[4710]: I1128 17:14:24.994875 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 28 17:14:24 crc kubenswrapper[4710]: I1128 17:14:24.995269 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-vmq2b" Nov 28 17:14:24 crc kubenswrapper[4710]: I1128 17:14:24.995402 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 28 17:14:25 crc kubenswrapper[4710]: I1128 17:14:25.011125 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x54rv"] Nov 28 17:14:25 crc kubenswrapper[4710]: I1128 17:14:25.120861 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxkh\" (UniqueName: \"kubernetes.io/projected/19385955-6e16-4c6c-84f6-8bf35bfefe25-kube-api-access-ggxkh\") pod \"openstack-operator-index-x54rv\" (UID: \"19385955-6e16-4c6c-84f6-8bf35bfefe25\") " pod="openstack-operators/openstack-operator-index-x54rv" Nov 28 17:14:25 crc kubenswrapper[4710]: I1128 17:14:25.222295 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxkh\" (UniqueName: \"kubernetes.io/projected/19385955-6e16-4c6c-84f6-8bf35bfefe25-kube-api-access-ggxkh\") pod \"openstack-operator-index-x54rv\" (UID: \"19385955-6e16-4c6c-84f6-8bf35bfefe25\") " pod="openstack-operators/openstack-operator-index-x54rv" Nov 28 17:14:25 crc kubenswrapper[4710]: I1128 17:14:25.257412 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxkh\" (UniqueName: \"kubernetes.io/projected/19385955-6e16-4c6c-84f6-8bf35bfefe25-kube-api-access-ggxkh\") pod \"openstack-operator-index-x54rv\" (UID: \"19385955-6e16-4c6c-84f6-8bf35bfefe25\") " pod="openstack-operators/openstack-operator-index-x54rv" Nov 28 17:14:25 crc kubenswrapper[4710]: I1128 17:14:25.310829 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x54rv" Nov 28 17:14:25 crc kubenswrapper[4710]: I1128 17:14:25.425747 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:25 crc kubenswrapper[4710]: I1128 17:14:25.485019 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:25 crc kubenswrapper[4710]: I1128 17:14:25.658065 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x54rv"] Nov 28 17:14:25 crc kubenswrapper[4710]: W1128 17:14:25.665190 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19385955_6e16_4c6c_84f6_8bf35bfefe25.slice/crio-3f977787a6caa44fe813132c61457d55d245f8ee90b4834b551449fde998d46a WatchSource:0}: Error finding container 3f977787a6caa44fe813132c61457d55d245f8ee90b4834b551449fde998d46a: Status 404 returned error can't find the container with id 3f977787a6caa44fe813132c61457d55d245f8ee90b4834b551449fde998d46a Nov 28 17:14:26 crc kubenswrapper[4710]: I1128 17:14:26.502872 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x54rv" event={"ID":"19385955-6e16-4c6c-84f6-8bf35bfefe25","Type":"ContainerStarted","Data":"3f977787a6caa44fe813132c61457d55d245f8ee90b4834b551449fde998d46a"} Nov 28 17:14:27 crc kubenswrapper[4710]: I1128 17:14:27.771568 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x54rv"] Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.383735 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-dlp9m"] Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.385964 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-dlp9m" Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.405557 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dlp9m"] Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.483085 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf62d\" (UniqueName: \"kubernetes.io/projected/102b7cf3-c9f4-47f9-8472-b3659a7c9b4a-kube-api-access-vf62d\") pod \"openstack-operator-index-dlp9m\" (UID: \"102b7cf3-c9f4-47f9-8472-b3659a7c9b4a\") " pod="openstack-operators/openstack-operator-index-dlp9m" Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.528829 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x54rv" event={"ID":"19385955-6e16-4c6c-84f6-8bf35bfefe25","Type":"ContainerStarted","Data":"65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5"} Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.528990 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-x54rv" podUID="19385955-6e16-4c6c-84f6-8bf35bfefe25" containerName="registry-server" containerID="cri-o://65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5" gracePeriod=2 Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.560298 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x54rv" podStartSLOduration=2.05696194 podStartE2EDuration="4.560271583s" podCreationTimestamp="2025-11-28 17:14:24 +0000 UTC" firstStartedPulling="2025-11-28 17:14:25.667481895 +0000 UTC m=+954.925781940" lastFinishedPulling="2025-11-28 17:14:28.170791538 +0000 UTC m=+957.429091583" observedRunningTime="2025-11-28 17:14:28.549579194 +0000 UTC m=+957.807879289" watchObservedRunningTime="2025-11-28 17:14:28.560271583 +0000 UTC m=+957.818571628" Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.585011 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vf62d\" (UniqueName: \"kubernetes.io/projected/102b7cf3-c9f4-47f9-8472-b3659a7c9b4a-kube-api-access-vf62d\") pod \"openstack-operator-index-dlp9m\" (UID: \"102b7cf3-c9f4-47f9-8472-b3659a7c9b4a\") " pod="openstack-operators/openstack-operator-index-dlp9m" Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.605192 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vf62d\" (UniqueName: \"kubernetes.io/projected/102b7cf3-c9f4-47f9-8472-b3659a7c9b4a-kube-api-access-vf62d\") pod \"openstack-operator-index-dlp9m\" (UID: \"102b7cf3-c9f4-47f9-8472-b3659a7c9b4a\") " pod="openstack-operators/openstack-operator-index-dlp9m" Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.714661 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-dlp9m" Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.917356 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x54rv" Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.991401 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggxkh\" (UniqueName: \"kubernetes.io/projected/19385955-6e16-4c6c-84f6-8bf35bfefe25-kube-api-access-ggxkh\") pod \"19385955-6e16-4c6c-84f6-8bf35bfefe25\" (UID: \"19385955-6e16-4c6c-84f6-8bf35bfefe25\") " Nov 28 17:14:28 crc kubenswrapper[4710]: I1128 17:14:28.996387 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19385955-6e16-4c6c-84f6-8bf35bfefe25-kube-api-access-ggxkh" (OuterVolumeSpecName: "kube-api-access-ggxkh") pod "19385955-6e16-4c6c-84f6-8bf35bfefe25" (UID: "19385955-6e16-4c6c-84f6-8bf35bfefe25"). InnerVolumeSpecName "kube-api-access-ggxkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.093137 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggxkh\" (UniqueName: \"kubernetes.io/projected/19385955-6e16-4c6c-84f6-8bf35bfefe25-kube-api-access-ggxkh\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.153060 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-dlp9m"] Nov 28 17:14:29 crc kubenswrapper[4710]: W1128 17:14:29.154795 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod102b7cf3_c9f4_47f9_8472_b3659a7c9b4a.slice/crio-43cf3fc3a6330a9b813f532140b640368098d75672845e449e23dae678593942 WatchSource:0}: Error finding container 43cf3fc3a6330a9b813f532140b640368098d75672845e449e23dae678593942: Status 404 returned error can't find the container with id 43cf3fc3a6330a9b813f532140b640368098d75672845e449e23dae678593942 Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.537582 4710 generic.go:334] "Generic (PLEG): container finished" podID="19385955-6e16-4c6c-84f6-8bf35bfefe25" containerID="65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5" exitCode=0 Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.537651 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x54rv" event={"ID":"19385955-6e16-4c6c-84f6-8bf35bfefe25","Type":"ContainerDied","Data":"65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5"} Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.537677 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x54rv" event={"ID":"19385955-6e16-4c6c-84f6-8bf35bfefe25","Type":"ContainerDied","Data":"3f977787a6caa44fe813132c61457d55d245f8ee90b4834b551449fde998d46a"} Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.537686 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x54rv" Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.537730 4710 scope.go:117] "RemoveContainer" containerID="65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5" Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.539233 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dlp9m" event={"ID":"102b7cf3-c9f4-47f9-8472-b3659a7c9b4a","Type":"ContainerStarted","Data":"43cf3fc3a6330a9b813f532140b640368098d75672845e449e23dae678593942"} Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.563011 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-x54rv"] Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.569356 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-x54rv"] Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.579355 4710 scope.go:117] "RemoveContainer" containerID="65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5" Nov 28 17:14:29 crc kubenswrapper[4710]: E1128 17:14:29.585467 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5\": container with ID starting with 65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5 not found: ID does not exist" containerID="65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5" Nov 28 17:14:29 crc kubenswrapper[4710]: I1128 17:14:29.585519 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5"} err="failed to get container status \"65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5\": rpc error: code = NotFound desc = could not find container \"65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5\": container with ID starting with 65bfe0ad4c74cdd01dfba0eebcaf01a2b00fcd6eefd0aebdff2697e7c1db15c5 not found: ID does not exist" Nov 28 17:14:30 crc kubenswrapper[4710]: I1128 17:14:30.547267 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-dlp9m" event={"ID":"102b7cf3-c9f4-47f9-8472-b3659a7c9b4a","Type":"ContainerStarted","Data":"7654320c65432d7a836f4203255c61c5bdf26cb5e19616e013f45550fbb9b096"} Nov 28 17:14:30 crc kubenswrapper[4710]: I1128 17:14:30.561315 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-dlp9m" podStartSLOduration=1.999196701 podStartE2EDuration="2.561293659s" podCreationTimestamp="2025-11-28 17:14:28 +0000 UTC" firstStartedPulling="2025-11-28 17:14:29.158734483 +0000 UTC m=+958.417034538" lastFinishedPulling="2025-11-28 17:14:29.720831431 +0000 UTC m=+958.979131496" observedRunningTime="2025-11-28 17:14:30.560536715 +0000 UTC m=+959.818836770" watchObservedRunningTime="2025-11-28 17:14:30.561293659 +0000 UTC m=+959.819593714" Nov 28 17:14:31 crc kubenswrapper[4710]: I1128 17:14:31.035819 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-pj7zp" Nov 28 17:14:31 crc kubenswrapper[4710]: I1128 17:14:31.153129 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19385955-6e16-4c6c-84f6-8bf35bfefe25" 
path="/var/lib/kubelet/pods/19385955-6e16-4c6c-84f6-8bf35bfefe25/volumes" Nov 28 17:14:38 crc kubenswrapper[4710]: I1128 17:14:38.715431 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-dlp9m" Nov 28 17:14:38 crc kubenswrapper[4710]: I1128 17:14:38.715991 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-dlp9m" Nov 28 17:14:38 crc kubenswrapper[4710]: I1128 17:14:38.753810 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-dlp9m" Nov 28 17:14:39 crc kubenswrapper[4710]: I1128 17:14:39.648444 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-dlp9m" Nov 28 17:14:40 crc kubenswrapper[4710]: I1128 17:14:40.430135 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-7t69j" Nov 28 17:14:41 crc kubenswrapper[4710]: I1128 17:14:41.812613 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6"] Nov 28 17:14:41 crc kubenswrapper[4710]: E1128 17:14:41.812936 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19385955-6e16-4c6c-84f6-8bf35bfefe25" containerName="registry-server" Nov 28 17:14:41 crc kubenswrapper[4710]: I1128 17:14:41.812948 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="19385955-6e16-4c6c-84f6-8bf35bfefe25" containerName="registry-server" Nov 28 17:14:41 crc kubenswrapper[4710]: I1128 17:14:41.813079 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="19385955-6e16-4c6c-84f6-8bf35bfefe25" containerName="registry-server" Nov 28 17:14:41 crc kubenswrapper[4710]: I1128 17:14:41.814038 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:41 crc kubenswrapper[4710]: I1128 17:14:41.816402 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wc9sb" Nov 28 17:14:41 crc kubenswrapper[4710]: I1128 17:14:41.837672 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6"] Nov 28 17:14:41 crc kubenswrapper[4710]: I1128 17:14:41.906221 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-bundle\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:41 crc kubenswrapper[4710]: I1128 17:14:41.906324 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-util\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:41 crc kubenswrapper[4710]: I1128 17:14:41.906530 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcpb7\" (UniqueName: \"kubernetes.io/projected/36ceecc9-0707-4f74-aa62-94ffa7887814-kube-api-access-wcpb7\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:42 crc kubenswrapper[4710]: I1128 17:14:42.008337 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-bundle\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:42 crc kubenswrapper[4710]: I1128 17:14:42.008419 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-util\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:42 crc kubenswrapper[4710]: I1128 17:14:42.008486 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcpb7\" (UniqueName: \"kubernetes.io/projected/36ceecc9-0707-4f74-aa62-94ffa7887814-kube-api-access-wcpb7\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:42 crc kubenswrapper[4710]: I1128 17:14:42.009013 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-bundle\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:42 crc kubenswrapper[4710]: I1128 17:14:42.009249 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-util\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:42 crc kubenswrapper[4710]: I1128 17:14:42.028456 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcpb7\" (UniqueName: \"kubernetes.io/projected/36ceecc9-0707-4f74-aa62-94ffa7887814-kube-api-access-wcpb7\") pod \"ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:42 crc kubenswrapper[4710]: I1128 17:14:42.135002 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:42 crc kubenswrapper[4710]: I1128 17:14:42.568712 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6"] Nov 28 17:14:42 crc kubenswrapper[4710]: W1128 17:14:42.575680 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36ceecc9_0707_4f74_aa62_94ffa7887814.slice/crio-7786af9966c6a3e7f6b00f723901587c2ab4d5adb114e3f74adb27ddabc14358 WatchSource:0}: Error finding container 7786af9966c6a3e7f6b00f723901587c2ab4d5adb114e3f74adb27ddabc14358: Status 404 returned error can't find the container with id 7786af9966c6a3e7f6b00f723901587c2ab4d5adb114e3f74adb27ddabc14358 Nov 28 17:14:42 crc kubenswrapper[4710]: I1128 17:14:42.633730 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" event={"ID":"36ceecc9-0707-4f74-aa62-94ffa7887814","Type":"ContainerStarted","Data":"7786af9966c6a3e7f6b00f723901587c2ab4d5adb114e3f74adb27ddabc14358"} Nov 28 17:14:43 crc kubenswrapper[4710]: I1128 17:14:43.343695 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:14:43 crc kubenswrapper[4710]: I1128 17:14:43.344185 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:14:43 crc kubenswrapper[4710]: I1128 17:14:43.344258 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:14:43 crc kubenswrapper[4710]: I1128 
17:14:43.345257 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"739dbee0820156a6554c32a8264c90cabd429c04c249177fc7347cfeddb379ed"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:14:43 crc kubenswrapper[4710]: I1128 17:14:43.345329 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://739dbee0820156a6554c32a8264c90cabd429c04c249177fc7347cfeddb379ed" gracePeriod=600 Nov 28 17:14:45 crc kubenswrapper[4710]: I1128 17:14:45.668849 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="739dbee0820156a6554c32a8264c90cabd429c04c249177fc7347cfeddb379ed" exitCode=0 Nov 28 17:14:45 crc kubenswrapper[4710]: I1128 17:14:45.668910 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"739dbee0820156a6554c32a8264c90cabd429c04c249177fc7347cfeddb379ed"} Nov 28 17:14:45 crc kubenswrapper[4710]: I1128 17:14:45.669263 4710 scope.go:117] "RemoveContainer" containerID="c6d85207656f6d2601d2bdd070cb40b8f4df58d52a8f16d4308eea97c4776e87" Nov 28 17:14:45 crc kubenswrapper[4710]: I1128 17:14:45.671445 4710 generic.go:334] "Generic (PLEG): container finished" podID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerID="d2efd015d5602023c86ed742453ad848b86496f1c17846f49ed9584fa939cd1a" exitCode=0 Nov 28 17:14:45 crc kubenswrapper[4710]: I1128 17:14:45.671485 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" event={"ID":"36ceecc9-0707-4f74-aa62-94ffa7887814","Type":"ContainerDied","Data":"d2efd015d5602023c86ed742453ad848b86496f1c17846f49ed9584fa939cd1a"} Nov 28 17:14:46 crc kubenswrapper[4710]: I1128 17:14:46.684040 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"fb26b81e49ab86b80e712b9b1ccbaa329c394a8a23985c1f1e0d00b07d836649"} Nov 28 17:14:47 crc kubenswrapper[4710]: I1128 17:14:47.692539 4710 generic.go:334] "Generic (PLEG): container finished" podID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerID="9b3aa4f5be1163eb7e9a0d27507cba56951c526d4e4e5757c695ebe14e216afa" exitCode=0 Nov 28 17:14:47 crc kubenswrapper[4710]: I1128 17:14:47.692591 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" event={"ID":"36ceecc9-0707-4f74-aa62-94ffa7887814","Type":"ContainerDied","Data":"9b3aa4f5be1163eb7e9a0d27507cba56951c526d4e4e5757c695ebe14e216afa"} Nov 28 17:14:48 crc kubenswrapper[4710]: I1128 17:14:48.700725 4710 generic.go:334] "Generic (PLEG): container finished" podID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerID="b90c5f1b8193e2e71e22df54e3f120e8834dbf8f49ce2de559dcd8cd29b092a1" exitCode=0 Nov 28 17:14:48 crc kubenswrapper[4710]: I1128 17:14:48.700809 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" event={"ID":"36ceecc9-0707-4f74-aa62-94ffa7887814","Type":"ContainerDied","Data":"b90c5f1b8193e2e71e22df54e3f120e8834dbf8f49ce2de559dcd8cd29b092a1"} Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.017986 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.092401 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-bundle\") pod \"36ceecc9-0707-4f74-aa62-94ffa7887814\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.092732 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-util\") pod \"36ceecc9-0707-4f74-aa62-94ffa7887814\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.092771 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcpb7\" (UniqueName: \"kubernetes.io/projected/36ceecc9-0707-4f74-aa62-94ffa7887814-kube-api-access-wcpb7\") pod \"36ceecc9-0707-4f74-aa62-94ffa7887814\" (UID: \"36ceecc9-0707-4f74-aa62-94ffa7887814\") " Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.097079 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-bundle" (OuterVolumeSpecName: "bundle") pod "36ceecc9-0707-4f74-aa62-94ffa7887814" (UID: "36ceecc9-0707-4f74-aa62-94ffa7887814"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.100721 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ceecc9-0707-4f74-aa62-94ffa7887814-kube-api-access-wcpb7" (OuterVolumeSpecName: "kube-api-access-wcpb7") pod "36ceecc9-0707-4f74-aa62-94ffa7887814" (UID: "36ceecc9-0707-4f74-aa62-94ffa7887814"). InnerVolumeSpecName "kube-api-access-wcpb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.106495 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-util" (OuterVolumeSpecName: "util") pod "36ceecc9-0707-4f74-aa62-94ffa7887814" (UID: "36ceecc9-0707-4f74-aa62-94ffa7887814"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.194415 4710 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.194447 4710 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/36ceecc9-0707-4f74-aa62-94ffa7887814-util\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.194457 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcpb7\" (UniqueName: \"kubernetes.io/projected/36ceecc9-0707-4f74-aa62-94ffa7887814-kube-api-access-wcpb7\") on node \"crc\" DevicePath \"\"" Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.715878 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" event={"ID":"36ceecc9-0707-4f74-aa62-94ffa7887814","Type":"ContainerDied","Data":"7786af9966c6a3e7f6b00f723901587c2ab4d5adb114e3f74adb27ddabc14358"} Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.715913 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7786af9966c6a3e7f6b00f723901587c2ab4d5adb114e3f74adb27ddabc14358" Nov 28 17:14:50 crc kubenswrapper[4710]: I1128 17:14:50.715935 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6" Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.733731 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn"] Nov 28 17:14:53 crc kubenswrapper[4710]: E1128 17:14:53.734450 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerName="util" Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.734468 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerName="util" Nov 28 17:14:53 crc kubenswrapper[4710]: E1128 17:14:53.734489 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerName="extract" Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.734496 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerName="extract" Nov 28 17:14:53 crc kubenswrapper[4710]: E1128 17:14:53.734507 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerName="pull" Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.734516 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerName="pull" Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.734739 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ceecc9-0707-4f74-aa62-94ffa7887814" containerName="extract" Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.735388 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.739288 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-n48zr" Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.765951 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn"] Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.892257 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkhjf\" (UniqueName: \"kubernetes.io/projected/01c82b0a-0363-428f-83ad-77949cd978cb-kube-api-access-rkhjf\") pod \"openstack-operator-controller-operator-96cfcb97f-22bhn\" (UID: \"01c82b0a-0363-428f-83ad-77949cd978cb\") " pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" Nov 28 17:14:53 crc kubenswrapper[4710]: I1128 17:14:53.994444 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkhjf\" (UniqueName: \"kubernetes.io/projected/01c82b0a-0363-428f-83ad-77949cd978cb-kube-api-access-rkhjf\") pod \"openstack-operator-controller-operator-96cfcb97f-22bhn\" (UID: \"01c82b0a-0363-428f-83ad-77949cd978cb\") " pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" Nov 28 17:14:54 crc kubenswrapper[4710]: I1128 17:14:54.012470 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkhjf\" (UniqueName: \"kubernetes.io/projected/01c82b0a-0363-428f-83ad-77949cd978cb-kube-api-access-rkhjf\") pod \"openstack-operator-controller-operator-96cfcb97f-22bhn\" (UID: \"01c82b0a-0363-428f-83ad-77949cd978cb\") " pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" Nov 28 17:14:54 crc kubenswrapper[4710]: I1128 17:14:54.078516 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" Nov 28 17:14:54 crc kubenswrapper[4710]: I1128 17:14:54.330008 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn"] Nov 28 17:14:54 crc kubenswrapper[4710]: W1128 17:14:54.334879 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01c82b0a_0363_428f_83ad_77949cd978cb.slice/crio-912c937c9f36e81a406fb2b9456a43dcde3aac7a8db1fc322fbbdab468de4e46 WatchSource:0}: Error finding container 912c937c9f36e81a406fb2b9456a43dcde3aac7a8db1fc322fbbdab468de4e46: Status 404 returned error can't find the container with id 912c937c9f36e81a406fb2b9456a43dcde3aac7a8db1fc322fbbdab468de4e46 Nov 28 17:14:54 crc kubenswrapper[4710]: I1128 17:14:54.745159 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" event={"ID":"01c82b0a-0363-428f-83ad-77949cd978cb","Type":"ContainerStarted","Data":"912c937c9f36e81a406fb2b9456a43dcde3aac7a8db1fc322fbbdab468de4e46"} Nov 28 17:14:59 crc kubenswrapper[4710]: I1128 17:14:59.779853 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" event={"ID":"01c82b0a-0363-428f-83ad-77949cd978cb","Type":"ContainerStarted","Data":"cc7b4b1c99d7fe8336a336fd064e4de6cfb59f789fb2d1a6d673f4268a421b53"} Nov 28 17:14:59 crc kubenswrapper[4710]: I1128 17:14:59.780233 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" Nov 28 17:14:59 crc kubenswrapper[4710]: I1128 17:14:59.825775 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" podStartSLOduration=2.410482449 podStartE2EDuration="6.825733598s" podCreationTimestamp="2025-11-28 17:14:53 +0000 UTC" firstStartedPulling="2025-11-28 17:14:54.337061228 +0000 UTC m=+983.595361273" lastFinishedPulling="2025-11-28 17:14:58.752312377 +0000 UTC m=+988.010612422" observedRunningTime="2025-11-28 17:14:59.818653193 +0000 UTC m=+989.076953278" watchObservedRunningTime="2025-11-28 17:14:59.825733598 +0000 UTC m=+989.084033633" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.156599 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5"] Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.158813 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.161286 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.161994 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.165202 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5"] Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.201900 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2pj\" (UniqueName: \"kubernetes.io/projected/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-kube-api-access-7p2pj\") pod \"collect-profiles-29405835-wptj5\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.201942 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-config-volume\") pod \"collect-profiles-29405835-wptj5\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.202035 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-secret-volume\") pod \"collect-profiles-29405835-wptj5\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.305183 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-config-volume\") pod \"collect-profiles-29405835-wptj5\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.309018 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-config-volume\") pod \"collect-profiles-29405835-wptj5\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.309106 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p2pj\" (UniqueName: \"kubernetes.io/projected/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-kube-api-access-7p2pj\") pod \"collect-profiles-29405835-wptj5\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.309230 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-secret-volume\") pod 
\"collect-profiles-29405835-wptj5\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.316466 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-secret-volume\") pod \"collect-profiles-29405835-wptj5\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.331721 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p2pj\" (UniqueName: \"kubernetes.io/projected/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-kube-api-access-7p2pj\") pod \"collect-profiles-29405835-wptj5\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.476039 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:00 crc kubenswrapper[4710]: I1128 17:15:00.918965 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5"] Nov 28 17:15:01 crc kubenswrapper[4710]: I1128 17:15:01.798452 4710 generic.go:334] "Generic (PLEG): container finished" podID="1ad9606e-ed8b-4be2-b066-4b9bc7935a85" containerID="9eead0610ace5731b807fc23aaf441d559113844a215e40c6a8f1a18fb4b157f" exitCode=0 Nov 28 17:15:01 crc kubenswrapper[4710]: I1128 17:15:01.798536 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" event={"ID":"1ad9606e-ed8b-4be2-b066-4b9bc7935a85","Type":"ContainerDied","Data":"9eead0610ace5731b807fc23aaf441d559113844a215e40c6a8f1a18fb4b157f"} Nov 28 17:15:01 crc kubenswrapper[4710]: I1128 17:15:01.798831 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" event={"ID":"1ad9606e-ed8b-4be2-b066-4b9bc7935a85","Type":"ContainerStarted","Data":"06455170edeaf0618aaa08670b659fb85b5849dceece103170dabdf3a8899e31"} Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.083699 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.251298 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p2pj\" (UniqueName: \"kubernetes.io/projected/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-kube-api-access-7p2pj\") pod \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.251414 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-config-volume\") pod \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.251613 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-secret-volume\") pod \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\" (UID: \"1ad9606e-ed8b-4be2-b066-4b9bc7935a85\") " Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.252192 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-config-volume" (OuterVolumeSpecName: "config-volume") pod "1ad9606e-ed8b-4be2-b066-4b9bc7935a85" (UID: "1ad9606e-ed8b-4be2-b066-4b9bc7935a85"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.257797 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-kube-api-access-7p2pj" (OuterVolumeSpecName: "kube-api-access-7p2pj") pod "1ad9606e-ed8b-4be2-b066-4b9bc7935a85" (UID: "1ad9606e-ed8b-4be2-b066-4b9bc7935a85"). InnerVolumeSpecName "kube-api-access-7p2pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.261922 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1ad9606e-ed8b-4be2-b066-4b9bc7935a85" (UID: "1ad9606e-ed8b-4be2-b066-4b9bc7935a85"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.353810 4710 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.353853 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p2pj\" (UniqueName: \"kubernetes.io/projected/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-kube-api-access-7p2pj\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.353863 4710 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ad9606e-ed8b-4be2-b066-4b9bc7935a85-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.813552 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" event={"ID":"1ad9606e-ed8b-4be2-b066-4b9bc7935a85","Type":"ContainerDied","Data":"06455170edeaf0618aaa08670b659fb85b5849dceece103170dabdf3a8899e31"} Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.813935 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06455170edeaf0618aaa08670b659fb85b5849dceece103170dabdf3a8899e31" Nov 28 17:15:03 crc kubenswrapper[4710]: I1128 17:15:03.813636 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5" Nov 28 17:15:04 crc kubenswrapper[4710]: I1128 17:15:04.081525 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-96cfcb97f-22bhn" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.399711 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc"] Nov 28 17:15:23 crc kubenswrapper[4710]: E1128 17:15:23.400725 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ad9606e-ed8b-4be2-b066-4b9bc7935a85" containerName="collect-profiles" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.400745 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ad9606e-ed8b-4be2-b066-4b9bc7935a85" containerName="collect-profiles" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.401156 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ad9606e-ed8b-4be2-b066-4b9bc7935a85" containerName="collect-profiles" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.402377 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.405398 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xlpp8" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.410434 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.412173 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.415546 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-gdlkb" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.421842 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.432881 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.448014 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.449297 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.454179 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-c5svf" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.458070 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.459453 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.463328 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-jpkp7" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.474922 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.484068 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.490625 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.492247 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.495733 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-rblbl" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.506815 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.515625 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.520226 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.523919 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.528234 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-ffmwc" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.532741 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-sns94"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.534363 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.543622 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.543966 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qr6m8" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.549429 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-sns94"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.566453 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.569016 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.572380 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-hxjk7" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.576234 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2dnf\" (UniqueName: \"kubernetes.io/projected/bafb8518-b399-4fe2-9577-8bb606450832-kube-api-access-x2dnf\") pod \"designate-operator-controller-manager-78b4bc895b-q8vpd\" (UID: \"bafb8518-b399-4fe2-9577-8bb606450832\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.576329 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gzg5\" (UniqueName: \"kubernetes.io/projected/a70892da-8396-4018-89e0-f25e7221e674-kube-api-access-6gzg5\") pod \"cinder-operator-controller-manager-859b6ccc6-7hsvg\" (UID: \"a70892da-8396-4018-89e0-f25e7221e674\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.576355 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlg89\" (UniqueName: \"kubernetes.io/projected/98f1d4c3-68b2-42b6-bbfa-e8aaec209764-kube-api-access-mlg89\") pod \"barbican-operator-controller-manager-7d9dfd778-s7xmc\" (UID: \"98f1d4c3-68b2-42b6-bbfa-e8aaec209764\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.576387 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgjk\" (UniqueName: \"kubernetes.io/projected/377d6817-3f41-4bba-9078-fa77dcdb9591-kube-api-access-lxgjk\") pod \"glance-operator-controller-manager-668d9c48b9-xxmrh\" (UID: \"377d6817-3f41-4bba-9078-fa77dcdb9591\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.581673 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.585836 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.591297 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.591380 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-t9s72" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.598423 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.610066 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.611669 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.617478 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-pjpt7" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.635715 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.681666 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p4z2\" (UniqueName: \"kubernetes.io/projected/baf8a76b-04b8-45d7-83b8-49ab823f2af1-kube-api-access-5p4z2\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.681749 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rmpj\" (UniqueName: \"kubernetes.io/projected/6ebfa717-92f8-4563-9456-644d1c107d6b-kube-api-access-8rmpj\") pod \"horizon-operator-controller-manager-68c6d99b8f-2gpds\" (UID: \"6ebfa717-92f8-4563-9456-644d1c107d6b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.681839 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnbv8\" (UniqueName: \"kubernetes.io/projected/448f2efe-7d9c-476e-af1c-3ebf62e2b6cb-kube-api-access-vnbv8\") pod \"heat-operator-controller-manager-5f64f6f8bb-sbhc4\" (UID: \"448f2efe-7d9c-476e-af1c-3ebf62e2b6cb\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.681960 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gzg5\" (UniqueName: \"kubernetes.io/projected/a70892da-8396-4018-89e0-f25e7221e674-kube-api-access-6gzg5\") pod \"cinder-operator-controller-manager-859b6ccc6-7hsvg\" (UID: \"a70892da-8396-4018-89e0-f25e7221e674\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.682017 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlg89\" (UniqueName: \"kubernetes.io/projected/98f1d4c3-68b2-42b6-bbfa-e8aaec209764-kube-api-access-mlg89\") pod \"barbican-operator-controller-manager-7d9dfd778-s7xmc\" (UID: \"98f1d4c3-68b2-42b6-bbfa-e8aaec209764\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.682052 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.682113 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vt6q\" (UniqueName: 
\"kubernetes.io/projected/a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf-kube-api-access-2vt6q\") pod \"ironic-operator-controller-manager-6c548fd776-tkzbw\" (UID: \"a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.682150 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgjk\" (UniqueName: \"kubernetes.io/projected/377d6817-3f41-4bba-9078-fa77dcdb9591-kube-api-access-lxgjk\") pod \"glance-operator-controller-manager-668d9c48b9-xxmrh\" (UID: \"377d6817-3f41-4bba-9078-fa77dcdb9591\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.682798 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tqzw\" (UniqueName: \"kubernetes.io/projected/81c851e8-e354-40c6-84cf-264f22be561f-kube-api-access-9tqzw\") pod \"keystone-operator-controller-manager-546d4bdf48-6h9mk\" (UID: \"81c851e8-e354-40c6-84cf-264f22be561f\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.683006 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2dnf\" (UniqueName: \"kubernetes.io/projected/bafb8518-b399-4fe2-9577-8bb606450832-kube-api-access-x2dnf\") pod \"designate-operator-controller-manager-78b4bc895b-q8vpd\" (UID: \"bafb8518-b399-4fe2-9577-8bb606450832\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.693238 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.705201 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.712368 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-wxbwl" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.716043 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2dnf\" (UniqueName: \"kubernetes.io/projected/bafb8518-b399-4fe2-9577-8bb606450832-kube-api-access-x2dnf\") pod \"designate-operator-controller-manager-78b4bc895b-q8vpd\" (UID: \"bafb8518-b399-4fe2-9577-8bb606450832\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.716325 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gzg5\" (UniqueName: \"kubernetes.io/projected/a70892da-8396-4018-89e0-f25e7221e674-kube-api-access-6gzg5\") pod \"cinder-operator-controller-manager-859b6ccc6-7hsvg\" (UID: \"a70892da-8396-4018-89e0-f25e7221e674\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.725854 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.735720 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.737176 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlg89\" (UniqueName: \"kubernetes.io/projected/98f1d4c3-68b2-42b6-bbfa-e8aaec209764-kube-api-access-mlg89\") pod \"barbican-operator-controller-manager-7d9dfd778-s7xmc\" (UID: \"98f1d4c3-68b2-42b6-bbfa-e8aaec209764\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.756938 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-867v6"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.758700 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.761307 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgjk\" (UniqueName: \"kubernetes.io/projected/377d6817-3f41-4bba-9078-fa77dcdb9591-kube-api-access-lxgjk\") pod \"glance-operator-controller-manager-668d9c48b9-xxmrh\" (UID: \"377d6817-3f41-4bba-9078-fa77dcdb9591\") " pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.761689 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kdz4b" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.767514 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.785666 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.786892 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.787163 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p4z2\" (UniqueName: \"kubernetes.io/projected/baf8a76b-04b8-45d7-83b8-49ab823f2af1-kube-api-access-5p4z2\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.787225 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rmpj\" (UniqueName: \"kubernetes.io/projected/6ebfa717-92f8-4563-9456-644d1c107d6b-kube-api-access-8rmpj\") pod \"horizon-operator-controller-manager-68c6d99b8f-2gpds\" (UID: \"6ebfa717-92f8-4563-9456-644d1c107d6b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.787292 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnbv8\" (UniqueName: \"kubernetes.io/projected/448f2efe-7d9c-476e-af1c-3ebf62e2b6cb-kube-api-access-vnbv8\") pod \"heat-operator-controller-manager-5f64f6f8bb-sbhc4\" (UID: \"448f2efe-7d9c-476e-af1c-3ebf62e2b6cb\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.787350 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.787400 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2chk\" (UniqueName: \"kubernetes.io/projected/a66ff16d-f7e8-42d1-9b40-e992fd3aabb2-kube-api-access-s2chk\") pod \"manila-operator-controller-manager-6546668bfd-bcg9d\" (UID: \"a66ff16d-f7e8-42d1-9b40-e992fd3aabb2\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.787440 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vt6q\" (UniqueName: \"kubernetes.io/projected/a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf-kube-api-access-2vt6q\") pod \"ironic-operator-controller-manager-6c548fd776-tkzbw\" (UID: \"a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.787474 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tqzw\" (UniqueName: \"kubernetes.io/projected/81c851e8-e354-40c6-84cf-264f22be561f-kube-api-access-9tqzw\") pod \"keystone-operator-controller-manager-546d4bdf48-6h9mk\" (UID: \"81c851e8-e354-40c6-84cf-264f22be561f\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" Nov 28 17:15:23 crc kubenswrapper[4710]: E1128 17:15:23.787570 4710 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 17:15:23 crc kubenswrapper[4710]: 
E1128 17:15:23.787625 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert podName:baf8a76b-04b8-45d7-83b8-49ab823f2af1 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:24.287604624 +0000 UTC m=+1013.545904739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert") pod "infra-operator-controller-manager-57548d458d-sns94" (UID: "baf8a76b-04b8-45d7-83b8-49ab823f2af1") : secret "infra-operator-webhook-server-cert" not found Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.787920 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.812440 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-8cqpp" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.848269 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vt6q\" (UniqueName: \"kubernetes.io/projected/a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf-kube-api-access-2vt6q\") pod \"ironic-operator-controller-manager-6c548fd776-tkzbw\" (UID: \"a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.848328 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p4z2\" (UniqueName: \"kubernetes.io/projected/baf8a76b-04b8-45d7-83b8-49ab823f2af1-kube-api-access-5p4z2\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.848637 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rmpj\" (UniqueName: \"kubernetes.io/projected/6ebfa717-92f8-4563-9456-644d1c107d6b-kube-api-access-8rmpj\") pod \"horizon-operator-controller-manager-68c6d99b8f-2gpds\" (UID: \"6ebfa717-92f8-4563-9456-644d1c107d6b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.850804 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-867v6"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.856494 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tqzw\" (UniqueName: \"kubernetes.io/projected/81c851e8-e354-40c6-84cf-264f22be561f-kube-api-access-9tqzw\") pod \"keystone-operator-controller-manager-546d4bdf48-6h9mk\" (UID: \"81c851e8-e354-40c6-84cf-264f22be561f\") " pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.864155 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.866786 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnbv8\" (UniqueName: \"kubernetes.io/projected/448f2efe-7d9c-476e-af1c-3ebf62e2b6cb-kube-api-access-vnbv8\") pod \"heat-operator-controller-manager-5f64f6f8bb-sbhc4\" (UID: \"448f2efe-7d9c-476e-af1c-3ebf62e2b6cb\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.876986 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.889200 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2chk\" (UniqueName: \"kubernetes.io/projected/a66ff16d-f7e8-42d1-9b40-e992fd3aabb2-kube-api-access-s2chk\") pod \"manila-operator-controller-manager-6546668bfd-bcg9d\" (UID: \"a66ff16d-f7e8-42d1-9b40-e992fd3aabb2\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.889254 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4dk\" (UniqueName: \"kubernetes.io/projected/92a0ce9b-b234-4954-bf20-890fa1a6785d-kube-api-access-qp4dk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-hsntq\" (UID: \"92a0ce9b-b234-4954-bf20-890fa1a6785d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.889290 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5k7v\" (UniqueName: \"kubernetes.io/projected/faacb861-2d5b-4629-8c6b-ae9427266b7b-kube-api-access-s5k7v\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-wd77l\" (UID: \"faacb861-2d5b-4629-8c6b-ae9427266b7b\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.889324 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zht\" (UniqueName: \"kubernetes.io/projected/b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c-kube-api-access-h4zht\") pod \"nova-operator-controller-manager-697bc559fc-867v6\" (UID: \"b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.911443 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.915184 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.916771 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2chk\" (UniqueName: \"kubernetes.io/projected/a66ff16d-f7e8-42d1-9b40-e992fd3aabb2-kube-api-access-s2chk\") pod \"manila-operator-controller-manager-6546668bfd-bcg9d\" (UID: \"a66ff16d-f7e8-42d1-9b40-e992fd3aabb2\") " pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.922400 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.924479 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.924569 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.934302 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-gcmf5" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.945841 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.951027 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.952274 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.955284 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.955508 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jjlck" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.984668 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.986634 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.995021 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-z2ct2" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.995169 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx"] Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.996024 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp4dk\" (UniqueName: \"kubernetes.io/projected/92a0ce9b-b234-4954-bf20-890fa1a6785d-kube-api-access-qp4dk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-hsntq\" (UID: \"92a0ce9b-b234-4954-bf20-890fa1a6785d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.996074 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5k7v\" (UniqueName: \"kubernetes.io/projected/faacb861-2d5b-4629-8c6b-ae9427266b7b-kube-api-access-s5k7v\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-wd77l\" (UID: \"faacb861-2d5b-4629-8c6b-ae9427266b7b\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" Nov 28 17:15:23 crc kubenswrapper[4710]: I1128 17:15:23.996113 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zht\" (UniqueName: \"kubernetes.io/projected/b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c-kube-api-access-h4zht\") pod \"nova-operator-controller-manager-697bc559fc-867v6\" (UID: \"b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.006827 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-45gjt"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.008229 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.012800 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-rjzv2" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.026950 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.027901 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.039911 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-45gjt"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.040680 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zht\" (UniqueName: \"kubernetes.io/projected/b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c-kube-api-access-h4zht\") pod \"nova-operator-controller-manager-697bc559fc-867v6\" (UID: \"b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.042124 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp4dk\" (UniqueName: \"kubernetes.io/projected/92a0ce9b-b234-4954-bf20-890fa1a6785d-kube-api-access-qp4dk\") pod \"mariadb-operator-controller-manager-56bbcc9d85-hsntq\" (UID: \"92a0ce9b-b234-4954-bf20-890fa1a6785d\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.046259 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5k7v\" (UniqueName: \"kubernetes.io/projected/faacb861-2d5b-4629-8c6b-ae9427266b7b-kube-api-access-s5k7v\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-wd77l\" (UID: \"faacb861-2d5b-4629-8c6b-ae9427266b7b\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.073970 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.079228 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.086923 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6n86w" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.100919 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f47b8\" (UniqueName: \"kubernetes.io/projected/3c2144e6-7894-4e16-9952-f4a4d848aa55-kube-api-access-f47b8\") pod \"ovn-operator-controller-manager-b6456fdb6-2c9kf\" (UID: \"3c2144e6-7894-4e16-9952-f4a4d848aa55\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.100968 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8k6\" (UniqueName: \"kubernetes.io/projected/5755fe75-0e8f-4b17-ab96-1efe5ace8c0f-kube-api-access-tb8k6\") pod \"placement-operator-controller-manager-78f8948974-45gjt\" (UID: \"5755fe75-0e8f-4b17-ab96-1efe5ace8c0f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.101012 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.101225 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtv5b\" (UniqueName: \"kubernetes.io/projected/5a6d5b4b-1460-41a8-a248-e814e32fb672-kube-api-access-dtv5b\") pod \"octavia-operator-controller-manager-998648c74-bjrnl\" (UID: \"5a6d5b4b-1460-41a8-a248-e814e32fb672\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.101312 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmfml\" (UniqueName: \"kubernetes.io/projected/ee89a2e2-f64c-4310-a271-8d4e7043279a-kube-api-access-jmfml\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.120586 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.144378 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.146702 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.150964 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qvftz" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.173007 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.187805 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.192635 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.208214 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f47b8\" (UniqueName: \"kubernetes.io/projected/3c2144e6-7894-4e16-9952-f4a4d848aa55-kube-api-access-f47b8\") pod \"ovn-operator-controller-manager-b6456fdb6-2c9kf\" (UID: \"3c2144e6-7894-4e16-9952-f4a4d848aa55\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.208277 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8k6\" (UniqueName: \"kubernetes.io/projected/5755fe75-0e8f-4b17-ab96-1efe5ace8c0f-kube-api-access-tb8k6\") pod \"placement-operator-controller-manager-78f8948974-45gjt\" (UID: \"5755fe75-0e8f-4b17-ab96-1efe5ace8c0f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.208348 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.208396 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtv5b\" (UniqueName: \"kubernetes.io/projected/5a6d5b4b-1460-41a8-a248-e814e32fb672-kube-api-access-dtv5b\") pod \"octavia-operator-controller-manager-998648c74-bjrnl\" (UID: \"5a6d5b4b-1460-41a8-a248-e814e32fb672\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.208477 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpxqj\" (UniqueName: \"kubernetes.io/projected/5c695701-bc1a-4210-87ca-9ee354e664bc-kube-api-access-qpxqj\") pod \"telemetry-operator-controller-manager-6b5d64d475-6p56z\" (UID: \"5c695701-bc1a-4210-87ca-9ee354e664bc\") " pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.208591 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmfml\" (UniqueName: \"kubernetes.io/projected/ee89a2e2-f64c-4310-a271-8d4e7043279a-kube-api-access-jmfml\") pod 
\"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.208718 4710 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.208845 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert podName:ee89a2e2-f64c-4310-a271-8d4e7043279a nodeName:}" failed. No retries permitted until 2025-11-28 17:15:24.708826555 +0000 UTC m=+1013.967126600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" (UID: "ee89a2e2-f64c-4310-a271-8d4e7043279a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.215525 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.219067 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-22spv"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.221099 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.235158 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-pxt4v" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.235785 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-22spv"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.236179 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.238436 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8k6\" (UniqueName: \"kubernetes.io/projected/5755fe75-0e8f-4b17-ab96-1efe5ace8c0f-kube-api-access-tb8k6\") pod \"placement-operator-controller-manager-78f8948974-45gjt\" (UID: \"5755fe75-0e8f-4b17-ab96-1efe5ace8c0f\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.242635 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtv5b\" (UniqueName: \"kubernetes.io/projected/5a6d5b4b-1460-41a8-a248-e814e32fb672-kube-api-access-dtv5b\") pod \"octavia-operator-controller-manager-998648c74-bjrnl\" (UID: \"5a6d5b4b-1460-41a8-a248-e814e32fb672\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.243671 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmfml\" (UniqueName: \"kubernetes.io/projected/ee89a2e2-f64c-4310-a271-8d4e7043279a-kube-api-access-jmfml\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.256054 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.273455 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f47b8\" (UniqueName: \"kubernetes.io/projected/3c2144e6-7894-4e16-9952-f4a4d848aa55-kube-api-access-f47b8\") pod \"ovn-operator-controller-manager-b6456fdb6-2c9kf\" (UID: \"3c2144e6-7894-4e16-9952-f4a4d848aa55\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.280132 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.298419 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.301277 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-75f5m" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.314460 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9htv\" (UniqueName: \"kubernetes.io/projected/419588b7-987b-44f5-81fd-76451ba0eb2d-kube-api-access-v9htv\") pod \"swift-operator-controller-manager-5f8c65bbfc-hznck\" (UID: \"419588b7-987b-44f5-81fd-76451ba0eb2d\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.314746 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.314878 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpxqj\" (UniqueName: \"kubernetes.io/projected/5c695701-bc1a-4210-87ca-9ee354e664bc-kube-api-access-qpxqj\") pod \"telemetry-operator-controller-manager-6b5d64d475-6p56z\" (UID: \"5c695701-bc1a-4210-87ca-9ee354e664bc\") " pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.315000 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv4km\" (UniqueName: \"kubernetes.io/projected/b3e15c80-d7b6-4d62-9eff-011dee6d7b6e-kube-api-access-dv4km\") pod \"test-operator-controller-manager-5854674fcc-22spv\" (UID: \"b3e15c80-d7b6-4d62-9eff-011dee6d7b6e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.315100 4710 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.315199 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert podName:baf8a76b-04b8-45d7-83b8-49ab823f2af1 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:25.31516578 +0000 UTC m=+1014.573465825 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert") pod "infra-operator-controller-manager-57548d458d-sns94" (UID: "baf8a76b-04b8-45d7-83b8-49ab823f2af1") : secret "infra-operator-webhook-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.335611 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.335624 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.347580 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpxqj\" (UniqueName: \"kubernetes.io/projected/5c695701-bc1a-4210-87ca-9ee354e664bc-kube-api-access-qpxqj\") pod \"telemetry-operator-controller-manager-6b5d64d475-6p56z\" (UID: \"5c695701-bc1a-4210-87ca-9ee354e664bc\") " pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.363917 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.398645 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.400161 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.402847 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.403117 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-b27jc" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.403266 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.411915 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.420540 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv4km\" (UniqueName: \"kubernetes.io/projected/b3e15c80-d7b6-4d62-9eff-011dee6d7b6e-kube-api-access-dv4km\") pod \"test-operator-controller-manager-5854674fcc-22spv\" (UID: \"b3e15c80-d7b6-4d62-9eff-011dee6d7b6e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.420691 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9v6d\" (UniqueName: \"kubernetes.io/projected/e31192ae-8aa1-4376-a40b-4bd8e0e45928-kube-api-access-s9v6d\") pod \"watcher-operator-controller-manager-769dc69bc-rxp9t\" (UID: \"e31192ae-8aa1-4376-a40b-4bd8e0e45928\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.420733 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9htv\" (UniqueName: \"kubernetes.io/projected/419588b7-987b-44f5-81fd-76451ba0eb2d-kube-api-access-v9htv\") pod \"swift-operator-controller-manager-5f8c65bbfc-hznck\" (UID: \"419588b7-987b-44f5-81fd-76451ba0eb2d\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.421846 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.433696 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.435145 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.437458 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-msffc" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.445684 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.469259 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv4km\" (UniqueName: \"kubernetes.io/projected/b3e15c80-d7b6-4d62-9eff-011dee6d7b6e-kube-api-access-dv4km\") pod \"test-operator-controller-manager-5854674fcc-22spv\" (UID: \"b3e15c80-d7b6-4d62-9eff-011dee6d7b6e\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.475497 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9htv\" (UniqueName: \"kubernetes.io/projected/419588b7-987b-44f5-81fd-76451ba0eb2d-kube-api-access-v9htv\") pod \"swift-operator-controller-manager-5f8c65bbfc-hznck\" (UID: \"419588b7-987b-44f5-81fd-76451ba0eb2d\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.480424 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.522043 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.522233 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9v6d\" (UniqueName: \"kubernetes.io/projected/e31192ae-8aa1-4376-a40b-4bd8e0e45928-kube-api-access-s9v6d\") pod \"watcher-operator-controller-manager-769dc69bc-rxp9t\" (UID: \"e31192ae-8aa1-4376-a40b-4bd8e0e45928\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.522327 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7cs\" (UniqueName: \"kubernetes.io/projected/61cb335c-2597-42e6-aa4c-410d8881b903-kube-api-access-4d7cs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.522491 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2rs4\" (UniqueName: \"kubernetes.io/projected/e557836a-92e3-47e0-8a29-e02ab29a9aea-kube-api-access-g2rs4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z7ndb\" (UID: \"e557836a-92e3-47e0-8a29-e02ab29a9aea\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.522553 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.543403 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9v6d\" (UniqueName: \"kubernetes.io/projected/e31192ae-8aa1-4376-a40b-4bd8e0e45928-kube-api-access-s9v6d\") pod \"watcher-operator-controller-manager-769dc69bc-rxp9t\" (UID: \"e31192ae-8aa1-4376-a40b-4bd8e0e45928\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.575807 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.627780 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.627893 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.627953 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7cs\" (UniqueName: \"kubernetes.io/projected/61cb335c-2597-42e6-aa4c-410d8881b903-kube-api-access-4d7cs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.628431 4710 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.628487 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:25.128470855 +0000 UTC m=+1014.386770900 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "webhook-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.628656 4710 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.628696 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:25.128685082 +0000 UTC m=+1014.386985127 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "metrics-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.628902 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2rs4\" (UniqueName: \"kubernetes.io/projected/e557836a-92e3-47e0-8a29-e02ab29a9aea-kube-api-access-g2rs4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z7ndb\" (UID: \"e557836a-92e3-47e0-8a29-e02ab29a9aea\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.648306 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2rs4\" (UniqueName: \"kubernetes.io/projected/e557836a-92e3-47e0-8a29-e02ab29a9aea-kube-api-access-g2rs4\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z7ndb\" (UID: \"e557836a-92e3-47e0-8a29-e02ab29a9aea\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.649117 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7cs\" (UniqueName: \"kubernetes.io/projected/61cb335c-2597-42e6-aa4c-410d8881b903-kube-api-access-4d7cs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.672171 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.731086 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.731315 4710 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: E1128 17:15:24.731382 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert podName:ee89a2e2-f64c-4310-a271-8d4e7043279a nodeName:}" failed. No retries permitted until 2025-11-28 17:15:25.731363891 +0000 UTC m=+1014.989663936 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" (UID: "ee89a2e2-f64c-4310-a271-8d4e7043279a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.789521 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd"] Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.846325 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb" Nov 28 17:15:24 crc kubenswrapper[4710]: W1128 17:15:24.889078 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbafb8518_b399_4fe2_9577_8bb606450832.slice/crio-b1ef6ee1c671a2674436a1986c6d9b66b31a8fccc97ddcac22f7c06b1f3e9aeb WatchSource:0}: Error finding container b1ef6ee1c671a2674436a1986c6d9b66b31a8fccc97ddcac22f7c06b1f3e9aeb: Status 404 returned error can't find the container with id b1ef6ee1c671a2674436a1986c6d9b66b31a8fccc97ddcac22f7c06b1f3e9aeb Nov 28 17:15:24 crc kubenswrapper[4710]: I1128 17:15:24.983012 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" event={"ID":"bafb8518-b399-4fe2-9577-8bb606450832","Type":"ContainerStarted","Data":"b1ef6ee1c671a2674436a1986c6d9b66b31a8fccc97ddcac22f7c06b1f3e9aeb"} Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.095606 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.103967 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.132864 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.143652 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.143726 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.143981 4710 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.144032 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. 
No retries permitted until 2025-11-28 17:15:26.144015231 +0000 UTC m=+1015.402315276 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "metrics-server-cert" not found Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.144444 4710 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.144489 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:26.144478836 +0000 UTC m=+1015.402778881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "webhook-server-cert" not found Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.263254 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq"] Nov 28 17:15:25 crc kubenswrapper[4710]: W1128 17:15:25.268722 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a0ce9b_b234_4954_bf20_890fa1a6785d.slice/crio-611a00cda929d44e013c3488827c8972143b5d6703895e01479f20f05be8a68b WatchSource:0}: Error finding container 611a00cda929d44e013c3488827c8972143b5d6703895e01479f20f05be8a68b: Status 404 returned error can't find the container with id 611a00cda929d44e013c3488827c8972143b5d6703895e01479f20f05be8a68b Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.277076 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.285890 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.321352 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.337007 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.348952 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.349889 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.350009 4710 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.350261 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert podName:baf8a76b-04b8-45d7-83b8-49ab823f2af1 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:27.350059651 +0000 UTC m=+1016.608359696 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert") pod "infra-operator-controller-manager-57548d458d-sns94" (UID: "baf8a76b-04b8-45d7-83b8-49ab823f2af1") : secret "infra-operator-webhook-server-cert" not found Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.538187 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-867v6"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.548093 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl"] Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.548545 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s5k7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
neutron-operator-controller-manager-5fdfd5b6b5-wd77l_openstack-operators(faacb861-2d5b-4629-8c6b-ae9427266b7b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.551512 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s5k7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-wd77l_openstack-operators(faacb861-2d5b-4629-8c6b-ae9427266b7b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.557223 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" podUID="faacb861-2d5b-4629-8c6b-ae9427266b7b" Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.583841 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.587262 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l"] Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.600501 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-45gjt_openstack-operators(5755fe75-0e8f-4b17-ab96-1efe5ace8c0f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.601039 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-45gjt"] Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.604938 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-operator-controller-manager-78f8948974-45gjt_openstack-operators(5755fe75-0e8f-4b17-ab96-1efe5ace8c0f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.606198 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" podUID="5755fe75-0e8f-4b17-ab96-1efe5ace8c0f" Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.764461 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-22spv"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.764752 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.765022 4710 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.765088 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert podName:ee89a2e2-f64c-4310-a271-8d4e7043279a nodeName:}" failed. No retries permitted until 2025-11-28 17:15:27.765069825 +0000 UTC m=+1017.023369870 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" (UID: "ee89a2e2-f64c-4310-a271-8d4e7043279a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.777392 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb"] Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.789378 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dv4km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-22spv_openstack-operators(b3e15c80-d7b6-4d62-9eff-011dee6d7b6e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.797445 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dv4km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-22spv_openstack-operators(b3e15c80-d7b6-4d62-9eff-011dee6d7b6e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.798815 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" podUID="b3e15c80-d7b6-4d62-9eff-011dee6d7b6e" Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.807522 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t"] Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.808806 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s9v6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-rxp9t_openstack-operators(e31192ae-8aa1-4376-a40b-4bd8e0e45928): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.826277 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s9v6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-rxp9t_openstack-operators(e31192ae-8aa1-4376-a40b-4bd8e0e45928): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:25 crc kubenswrapper[4710]: E1128 17:15:25.827710 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" podUID="e31192ae-8aa1-4376-a40b-4bd8e0e45928" Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.958593 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck"] Nov 28 17:15:25 crc kubenswrapper[4710]: I1128 17:15:25.982979 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z"] Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.011306 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.89:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qpxqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6b5d64d475-6p56z_openstack-operators(5c695701-bc1a-4210-87ca-9ee354e664bc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.017683 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qpxqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-6b5d64d475-6p56z_openstack-operators(5c695701-bc1a-4210-87ca-9ee354e664bc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.018876 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" event={"ID":"3c2144e6-7894-4e16-9952-f4a4d848aa55","Type":"ContainerStarted","Data":"5729a736322a64810e4616c8cd41c5bd09ef0aec6fce0194de209278c0e55e95"} Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.019045 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" podUID="5c695701-bc1a-4210-87ca-9ee354e664bc" Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.029348 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq" event={"ID":"92a0ce9b-b234-4954-bf20-890fa1a6785d","Type":"ContainerStarted","Data":"611a00cda929d44e013c3488827c8972143b5d6703895e01479f20f05be8a68b"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.030295 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb" event={"ID":"e557836a-92e3-47e0-8a29-e02ab29a9aea","Type":"ContainerStarted","Data":"75d653a0c01d8124c467302ec014e599bb9d9d990d5b96416cd49d563f74f05b"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.031156 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh" event={"ID":"377d6817-3f41-4bba-9078-fa77dcdb9591","Type":"ContainerStarted","Data":"5693081236ac767c3e153eb4f119f3cb733fa796b85a99f7abaa2991fc38e0fd"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.031998 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" event={"ID":"b3e15c80-d7b6-4d62-9eff-011dee6d7b6e","Type":"ContainerStarted","Data":"8850559886a46b28dc16655835b767a2f9a346358cd0a49db1ddd84c6c875077"} Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.048372 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" podUID="b3e15c80-d7b6-4d62-9eff-011dee6d7b6e" Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.049030 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" event={"ID":"81c851e8-e354-40c6-84cf-264f22be561f","Type":"ContainerStarted","Data":"ea863deab7431ba1d770bc2f1b7fe2a86b3339a3c187b9decb6b053ca9e1e94f"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.056152 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d" event={"ID":"a66ff16d-f7e8-42d1-9b40-e992fd3aabb2","Type":"ContainerStarted","Data":"3e7e432e3e303e2b5634b192e9c0623d0b6975ebda8a07a486198ad6a0afd123"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.060679 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" event={"ID":"6ebfa717-92f8-4563-9456-644d1c107d6b","Type":"ContainerStarted","Data":"32fc4a0bd8df28cc49edaedfce1baedf96763185e436658c97a8f0d00fc84b4b"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.072950 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" event={"ID":"e31192ae-8aa1-4376-a40b-4bd8e0e45928","Type":"ContainerStarted","Data":"8668e85f967be917b89117ffdace53f5ac98a0e4a02b315540b653f008a444e8"} Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.076703 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" podUID="e31192ae-8aa1-4376-a40b-4bd8e0e45928" Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.077296 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc" event={"ID":"98f1d4c3-68b2-42b6-bbfa-e8aaec209764","Type":"ContainerStarted","Data":"5f3f45aee7a65d5312d220f3d7f1340ac8f7eda0be498da0fe89fe9f0c258e98"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.078916 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" event={"ID":"419588b7-987b-44f5-81fd-76451ba0eb2d","Type":"ContainerStarted","Data":"999c815163f7b07195466dd1d21af8d60d1f0833e69cee51e66ab045caeb2137"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.085887 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" event={"ID":"5a6d5b4b-1460-41a8-a248-e814e32fb672","Type":"ContainerStarted","Data":"f04c9daa04597716c24279d819a0fa5f93003daabcb2a178b8a137c4161721b3"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 
17:15:26.095323 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg" event={"ID":"a70892da-8396-4018-89e0-f25e7221e674","Type":"ContainerStarted","Data":"ef0782575d4d0a1e7d107baf57ab42acdfaf13cc1ae7515748b3db1a2a23c7e8"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.097269 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" event={"ID":"a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf","Type":"ContainerStarted","Data":"03081d4e84e3ee5bd6569cdade3e9e483406d05a11cc0dbbfed86cff4e853ff3"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.098439 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" event={"ID":"faacb861-2d5b-4629-8c6b-ae9427266b7b","Type":"ContainerStarted","Data":"a3719bbdfbb5a333c582d15a1852db07583a8a54c718c205956ea501de68c272"} Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.100823 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" podUID="faacb861-2d5b-4629-8c6b-ae9427266b7b" Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.101214 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" event={"ID":"b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c","Type":"ContainerStarted","Data":"4bc35a40817146715844bb6696468616f57aa2d657d553f53235580315a00938"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.105612 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4" event={"ID":"448f2efe-7d9c-476e-af1c-3ebf62e2b6cb","Type":"ContainerStarted","Data":"42f0eedc67630ea60b4fd58e460760a5da1cf56e881884027551b0bc66072e82"} Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.109748 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" event={"ID":"5755fe75-0e8f-4b17-ab96-1efe5ace8c0f","Type":"ContainerStarted","Data":"c1fa0129f259c915f059cc7e3f24540291ed59fbda4954dd9fbc2507cb34d6fe"} Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.113500 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" podUID="5755fe75-0e8f-4b17-ab96-1efe5ace8c0f" Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.174226 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:26 crc kubenswrapper[4710]: I1128 17:15:26.174318 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.174353 4710 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.174453 4710 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.174732 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:28.174394779 +0000 UTC m=+1017.432694824 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "webhook-server-cert" not found Nov 28 17:15:26 crc kubenswrapper[4710]: E1128 17:15:26.174789 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:28.174780722 +0000 UTC m=+1017.433080767 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "metrics-server-cert" not found Nov 28 17:15:27 crc kubenswrapper[4710]: I1128 17:15:27.118563 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" event={"ID":"5c695701-bc1a-4210-87ca-9ee354e664bc","Type":"ContainerStarted","Data":"62d5585a5311fd525487c8ab8856f94f8b33b6275f09f4f95e6a46f38c6cc6cf"} Nov 28 17:15:27 crc kubenswrapper[4710]: E1128 17:15:27.120848 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" podUID="b3e15c80-d7b6-4d62-9eff-011dee6d7b6e" Nov 28 17:15:27 crc kubenswrapper[4710]: E1128 17:15:27.120910 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" podUID="5755fe75-0e8f-4b17-ab96-1efe5ace8c0f" Nov 28 17:15:27 crc kubenswrapper[4710]: E1128 17:15:27.121491 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" podUID="e31192ae-8aa1-4376-a40b-4bd8e0e45928" Nov 28 17:15:27 crc kubenswrapper[4710]: E1128 17:15:27.121615 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" podUID="faacb861-2d5b-4629-8c6b-ae9427266b7b" Nov 28 17:15:27 crc kubenswrapper[4710]: E1128 17:15:27.121649 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.89:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52\\\"\", 
failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" podUID="5c695701-bc1a-4210-87ca-9ee354e664bc" Nov 28 17:15:27 crc kubenswrapper[4710]: I1128 17:15:27.396667 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:15:27 crc kubenswrapper[4710]: E1128 17:15:27.396878 4710 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 28 17:15:27 crc kubenswrapper[4710]: E1128 17:15:27.396932 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert podName:baf8a76b-04b8-45d7-83b8-49ab823f2af1 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:31.396916238 +0000 UTC m=+1020.655216283 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert") pod "infra-operator-controller-manager-57548d458d-sns94" (UID: "baf8a76b-04b8-45d7-83b8-49ab823f2af1") : secret "infra-operator-webhook-server-cert" not found Nov 28 17:15:27 crc kubenswrapper[4710]: I1128 17:15:27.816540 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:15:27 crc kubenswrapper[4710]: E1128 17:15:27.816710 4710 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:15:27 crc kubenswrapper[4710]: E1128 17:15:27.817332 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert podName:ee89a2e2-f64c-4310-a271-8d4e7043279a nodeName:}" failed. No retries permitted until 2025-11-28 17:15:31.817310822 +0000 UTC m=+1021.075610867 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" (UID: "ee89a2e2-f64c-4310-a271-8d4e7043279a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 28 17:15:28 crc kubenswrapper[4710]: E1128 17:15:28.134966 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.89:5001/openstack-k8s-operators/telemetry-operator:bf35154a77d3f7d42763b9d6bf295684481cdc52\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" podUID="5c695701-bc1a-4210-87ca-9ee354e664bc" Nov 28 17:15:28 crc kubenswrapper[4710]: I1128 17:15:28.226446 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:28 crc kubenswrapper[4710]: I1128 17:15:28.226575 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:15:28 crc kubenswrapper[4710]: E1128 17:15:28.227693 4710 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 28 17:15:28 crc kubenswrapper[4710]: E1128 17:15:28.227747 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:32.227730031 +0000 UTC m=+1021.486030076 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "webhook-server-cert" not found Nov 28 17:15:28 crc kubenswrapper[4710]: E1128 17:15:28.229926 4710 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 28 17:15:28 crc kubenswrapper[4710]: E1128 17:15:28.230105 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:32.230045265 +0000 UTC m=+1021.488345370 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "metrics-server-cert" not found
Nov 28 17:15:31 crc kubenswrapper[4710]: I1128 17:15:31.494066 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94"
Nov 28 17:15:31 crc kubenswrapper[4710]: E1128 17:15:31.494288 4710 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 28 17:15:31 crc kubenswrapper[4710]: E1128 17:15:31.494532 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert podName:baf8a76b-04b8-45d7-83b8-49ab823f2af1 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:39.494510085 +0000 UTC m=+1028.752810180 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert") pod "infra-operator-controller-manager-57548d458d-sns94" (UID: "baf8a76b-04b8-45d7-83b8-49ab823f2af1") : secret "infra-operator-webhook-server-cert" not found
Nov 28 17:15:31 crc kubenswrapper[4710]: I1128 17:15:31.899887 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx"
Nov 28 17:15:31 crc kubenswrapper[4710]: E1128 17:15:31.900054 4710 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 17:15:31 crc kubenswrapper[4710]: E1128 17:15:31.900113 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert podName:ee89a2e2-f64c-4310-a271-8d4e7043279a nodeName:}" failed. No retries permitted until 2025-11-28 17:15:39.900098861 +0000 UTC m=+1029.158398906 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert") pod "openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" (UID: "ee89a2e2-f64c-4310-a271-8d4e7043279a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 28 17:15:32 crc kubenswrapper[4710]: I1128 17:15:32.305471 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"
Nov 28 17:15:32 crc kubenswrapper[4710]: E1128 17:15:32.305639 4710 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Nov 28 17:15:32 crc kubenswrapper[4710]: I1128 17:15:32.305904 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"
Nov 28 17:15:32 crc kubenswrapper[4710]: E1128 17:15:32.305930 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:40.305911013 +0000 UTC m=+1029.564211048 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "metrics-server-cert" not found
Nov 28 17:15:32 crc kubenswrapper[4710]: E1128 17:15:32.305961 4710 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 28 17:15:32 crc kubenswrapper[4710]: E1128 17:15:32.305991 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:40.305983496 +0000 UTC m=+1029.564283541 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "webhook-server-cert" not found
Nov 28 17:15:39 crc kubenswrapper[4710]: I1128 17:15:39.522645 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94"
Nov 28 17:15:39 crc kubenswrapper[4710]: I1128 17:15:39.530884 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/baf8a76b-04b8-45d7-83b8-49ab823f2af1-cert\") pod \"infra-operator-controller-manager-57548d458d-sns94\" (UID: \"baf8a76b-04b8-45d7-83b8-49ab823f2af1\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94"
Nov 28 17:15:39 crc kubenswrapper[4710]: I1128 17:15:39.774537 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-qr6m8"
Nov 28 17:15:39 crc kubenswrapper[4710]: I1128 17:15:39.782437 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94"
Nov 28 17:15:39 crc kubenswrapper[4710]: I1128 17:15:39.929789 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx"
Nov 28 17:15:39 crc kubenswrapper[4710]: I1128 17:15:39.937340 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ee89a2e2-f64c-4310-a271-8d4e7043279a-cert\") pod \"openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx\" (UID: \"ee89a2e2-f64c-4310-a271-8d4e7043279a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx"
Nov 28 17:15:40 crc kubenswrapper[4710]: I1128 17:15:40.197625 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jjlck"
Nov 28 17:15:40 crc kubenswrapper[4710]: I1128 17:15:40.206715 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx"
Nov 28 17:15:40 crc kubenswrapper[4710]: I1128 17:15:40.336404 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"
Nov 28 17:15:40 crc kubenswrapper[4710]: I1128 17:15:40.336569 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"
Nov 28 17:15:40 crc kubenswrapper[4710]: E1128 17:15:40.336659 4710 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 28 17:15:40 crc kubenswrapper[4710]: E1128 17:15:40.336739 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs podName:61cb335c-2597-42e6-aa4c-410d8881b903 nodeName:}" failed. No retries permitted until 2025-11-28 17:15:56.336720923 +0000 UTC m=+1045.595020968 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs") pod "openstack-operator-controller-manager-668879d68f-pd88h" (UID: "61cb335c-2597-42e6-aa4c-410d8881b903") : secret "webhook-server-cert" not found
Nov 28 17:15:40 crc kubenswrapper[4710]: I1128 17:15:40.345499 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-metrics-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"
Nov 28 17:15:43 crc kubenswrapper[4710]: E1128 17:15:43.401684 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530"
Nov 28 17:15:43 crc kubenswrapper[4710]: E1128 17:15:43.402529 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:0f523b7e2fa9e86fef986acf07d0c42d5658c475d565f11eaea926ebffcb6530,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vt6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-tkzbw_openstack-operators(a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 17:15:43 crc kubenswrapper[4710]: E1128 17:15:43.843736 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670"
Nov 28 17:15:43 crc kubenswrapper[4710]: E1128 17:15:43.843995 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4zht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-867v6_openstack-operators(b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 17:15:45 crc kubenswrapper[4710]: I1128 17:15:45.091231 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx"]
Nov 28 17:15:45 crc kubenswrapper[4710]: W1128 17:15:45.097121 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaf8a76b_04b8_45d7_83b8_49ab823f2af1.slice/crio-c9c7f8a33f6e36a665367fa195a16d18b1fc6155815dc006c6418b8f331f6039 WatchSource:0}: Error finding container c9c7f8a33f6e36a665367fa195a16d18b1fc6155815dc006c6418b8f331f6039: Status 404 returned error can't find the container with id c9c7f8a33f6e36a665367fa195a16d18b1fc6155815dc006c6418b8f331f6039
Nov 28 17:15:45 crc kubenswrapper[4710]: W1128 17:15:45.099899 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee89a2e2_f64c_4310_a271_8d4e7043279a.slice/crio-44aa0712886edee050fe668e4014490f341f98f356adf948ab5239570eed632e WatchSource:0}: Error finding container 44aa0712886edee050fe668e4014490f341f98f356adf948ab5239570eed632e: Status 404 returned error can't find the container with id 44aa0712886edee050fe668e4014490f341f98f356adf948ab5239570eed632e
Nov 28 17:15:45 crc kubenswrapper[4710]: I1128 17:15:45.101320 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-sns94"]
Nov 28 17:15:45 crc kubenswrapper[4710]: I1128 17:15:45.293383 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" event={"ID":"baf8a76b-04b8-45d7-83b8-49ab823f2af1","Type":"ContainerStarted","Data":"c9c7f8a33f6e36a665367fa195a16d18b1fc6155815dc006c6418b8f331f6039"}
Nov 28 17:15:45 crc kubenswrapper[4710]: I1128 17:15:45.294422 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" event={"ID":"ee89a2e2-f64c-4310-a271-8d4e7043279a","Type":"ContainerStarted","Data":"44aa0712886edee050fe668e4014490f341f98f356adf948ab5239570eed632e"}
Nov 28 17:15:50 crc kubenswrapper[4710]: I1128 17:15:50.333615 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" event={"ID":"5a6d5b4b-1460-41a8-a248-e814e32fb672","Type":"ContainerStarted","Data":"ff5ba56cc8680c12479a08547f66eeb5efd1c49337877e0fafe6e6a0818022ba"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.379681 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh" event={"ID":"377d6817-3f41-4bba-9078-fa77dcdb9591","Type":"ContainerStarted","Data":"bb16358d39fdb77d0d7d34327f7d5618967c436a6c3f2a11270cb3b294cc3daa"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.386346 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc" event={"ID":"98f1d4c3-68b2-42b6-bbfa-e8aaec209764","Type":"ContainerStarted","Data":"c72ddb496e3e5276d9866af868bd43605127be5f7610b76ac8a5f6773d6f2da0"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.388870 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" event={"ID":"3c2144e6-7894-4e16-9952-f4a4d848aa55","Type":"ContainerStarted","Data":"cf79db73c1206574bac8646d952ea6bb01c9470f38b569ee44e0a19aff1ce71a"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.390374 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d" event={"ID":"a66ff16d-f7e8-42d1-9b40-e992fd3aabb2","Type":"ContainerStarted","Data":"7d273d69b4ca940b02201c3d8fb2992f5b73697f7ae33a063fe90a1f7a62b18f"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.391617 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq" event={"ID":"92a0ce9b-b234-4954-bf20-890fa1a6785d","Type":"ContainerStarted","Data":"218ad9c0741c29e696c23e599378e8e4b8efa70d75a27729bfd9d11264f924e8"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.397074 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" event={"ID":"419588b7-987b-44f5-81fd-76451ba0eb2d","Type":"ContainerStarted","Data":"73bdbe976f0ac0ba52f7a8f6d9603b7bf9ae0305d91b0ce2dd655d569cc70740"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.399130 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg" event={"ID":"a70892da-8396-4018-89e0-f25e7221e674","Type":"ContainerStarted","Data":"540f253fea953dde34badcc302104e0c8e6bf1f12b967f0bcfb0329a5150addf"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.552126 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" event={"ID":"6ebfa717-92f8-4563-9456-644d1c107d6b","Type":"ContainerStarted","Data":"e3149c3b3ec299d627e5306a6607f8828f91327ce8575f8826d7f914880bb0cc"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.565124 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4" event={"ID":"448f2efe-7d9c-476e-af1c-3ebf62e2b6cb","Type":"ContainerStarted","Data":"4fe6fe9814a24f42bdadf68e1224f64853c7549c93193d5a9d8d7ee8305e1d30"}
Nov 28 17:15:51 crc kubenswrapper[4710]: I1128 17:15:51.569480 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" event={"ID":"bafb8518-b399-4fe2-9577-8bb606450832","Type":"ContainerStarted","Data":"a4d34991beb455e62d82cc5b301693a22bae6b73279d20c5879233faf350cc82"}
Nov 28 17:15:56 crc kubenswrapper[4710]: I1128 17:15:56.348531 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"
Nov 28 17:15:56 crc kubenswrapper[4710]: I1128 17:15:56.360730 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/61cb335c-2597-42e6-aa4c-410d8881b903-webhook-certs\") pod \"openstack-operator-controller-manager-668879d68f-pd88h\" (UID: \"61cb335c-2597-42e6-aa4c-410d8881b903\") " pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"
Nov 28 17:15:56 crc kubenswrapper[4710]: I1128 17:15:56.610971 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-b27jc"
Nov 28 17:15:56 crc kubenswrapper[4710]: I1128 17:15:56.620163 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"
Nov 28 17:15:57 crc kubenswrapper[4710]: E1128 17:15:57.572907 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3"
Nov 28 17:15:57 crc kubenswrapper[4710]: E1128 17:15:57.573454 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:986861e5a0a9954f63581d9d55a30f8057883cefea489415d76257774526eea3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9tqzw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-546d4bdf48-6h9mk_openstack-operators(81c851e8-e354-40c6-84cf-264f22be561f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 17:16:02 crc kubenswrapper[4710]: E1128 17:16:02.323453 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94"
Nov 28 17:16:02 crc kubenswrapper[4710]: E1128 17:16:02.323939 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dv4km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-22spv_openstack-operators(b3e15c80-d7b6-4d62-9eff-011dee6d7b6e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 17:16:03 crc kubenswrapper[4710]: E1128 17:16:03.172253 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557"
Nov 28 17:16:03 crc kubenswrapper[4710]: E1128 17:16:03.172711 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s5k7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-wd77l_openstack-operators(faacb861-2d5b-4629-8c6b-ae9427266b7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 17:16:04 crc kubenswrapper[4710]: E1128 17:16:04.601892 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f"
Nov 28 17:16:04 crc kubenswrapper[4710]: E1128 17:16:04.602089 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-45gjt_openstack-operators(5755fe75-0e8f-4b17-ab96-1efe5ace8c0f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 17:16:05 crc kubenswrapper[4710]: E1128 17:16:05.328893 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Nov 28 17:16:05 crc kubenswrapper[4710]: E1128 17:16:05.329056 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2vt6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-6c548fd776-tkzbw_openstack-operators(a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Nov 28 17:16:05 crc kubenswrapper[4710]: E1128 17:16:05.331304 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" podUID="a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf"
Nov 28 17:16:05 crc kubenswrapper[4710]: E1128 17:16:05.376400 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621"
Nov 28 17:16:05 crc kubenswrapper[4710]: E1128 17:16:05.376681 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s9v6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-769dc69bc-rxp9t_openstack-operators(e31192ae-8aa1-4376-a40b-4bd8e0e45928): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 28 17:16:07 crc kubenswrapper[4710]: E1128 17:16:07.313597 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Nov 28 17:16:07 crc kubenswrapper[4710]: E1128 17:16:07.314093 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4zht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-867v6_openstack-operators(b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Nov 28 17:16:07 crc kubenswrapper[4710]: E1128 17:16:07.315285 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"]" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" podUID="b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c"
Nov 28 17:16:08 crc kubenswrapper[4710]: I1128 17:16:08.575980 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"]
Nov 28 17:16:08 crc kubenswrapper[4710]: W1128 17:16:08.855715 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61cb335c_2597_42e6_aa4c_410d8881b903.slice/crio-02fb1e85a22ca6dcce8d472c492df963a65daaf01cee04221d0bc1fe271b66d9 WatchSource:0}: Error finding container 02fb1e85a22ca6dcce8d472c492df963a65daaf01cee04221d0bc1fe271b66d9: Status 404 returned error can't find the container with id 02fb1e85a22ca6dcce8d472c492df963a65daaf01cee04221d0bc1fe271b66d9
Nov 28 17:16:09 crc kubenswrapper[4710]: I1128 17:16:09.765244 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" event={"ID":"61cb335c-2597-42e6-aa4c-410d8881b903","Type":"ContainerStarted","Data":"02fb1e85a22ca6dcce8d472c492df963a65daaf01cee04221d0bc1fe271b66d9"}
Nov 28 17:16:13 crc kubenswrapper[4710]: I1128 17:16:13.808387 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" event={"ID":"61cb335c-2597-42e6-aa4c-410d8881b903","Type":"ContainerStarted","Data":"8424553d6cba8785af819f2a194039e33d5c117350f385c0a3d9352936bdf87c"}
Nov 28 17:16:13 crc kubenswrapper[4710]: I1128 17:16:13.809617 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h"
Nov 28 17:16:13 crc kubenswrapper[4710]: I1128 17:16:13.813005 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb" event={"ID":"e557836a-92e3-47e0-8a29-e02ab29a9aea","Type":"ContainerStarted","Data":"db30fe74440ceec080c6a5afe9290b84efe9920dbb099eb49bd95b2695033a62"}
Nov 28 17:16:13 crc kubenswrapper[4710]: I1128 17:16:13.851071 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" podStartSLOduration=49.85104736 podStartE2EDuration="49.85104736s" podCreationTimestamp="2025-11-28 17:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:16:13.843968565 +0000 UTC m=+1063.102268610" watchObservedRunningTime="2025-11-28 17:16:13.85104736 +0000 UTC m=+1063.109347405"
Nov 28 17:16:13 crc kubenswrapper[4710]: I1128 17:16:13.865782 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z7ndb" podStartSLOduration=10.326002916 podStartE2EDuration="49.865722336s" podCreationTimestamp="2025-11-28 17:15:24 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.789063564 +0000 UTC m=+1015.047363619" lastFinishedPulling="2025-11-28 17:16:05.328782984 +0000 UTC m=+1054.587083039" observedRunningTime="2025-11-28 17:16:13.863790204 +0000 UTC m=+1063.122090249" watchObservedRunningTime="2025-11-28 17:16:13.865722336 +0000 UTC m=+1063.124022381"
Nov 28 17:16:19 crc kubenswrapper[4710]: I1128 17:16:19.859213 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" event={"ID":"baf8a76b-04b8-45d7-83b8-49ab823f2af1","Type":"ContainerStarted","Data":"bb40634f33361a463ab01a33970e4d9a6b6cb4a5aacb0c7b2877191da4d2ce0a"}
Nov 28 17:16:20 crc kubenswrapper[4710]: I1128 17:16:20.868181 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" event={"ID":"5c695701-bc1a-4210-87ca-9ee354e664bc","Type":"ContainerStarted","Data":"043ef032afc9dfcb1f22304261b31dec278e3c67a24e797f3e0b26990d60bc29"}
Nov 28 17:16:20 crc kubenswrapper[4710]: I1128 17:16:20.869748 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" event={"ID":"a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf","Type":"ContainerStarted","Data":"b0345bbe9af8ddd8510d7fd0bcc835695f3f1e4c179b33fc421206891820de04"}
Nov 28 17:16:20 crc kubenswrapper[4710]: I1128 17:16:20.871391 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" event={"ID":"ee89a2e2-f64c-4310-a271-8d4e7043279a","Type":"ContainerStarted","Data":"308fa3d0a0ea1c79e12d3e25a2f7c86782fa7c18c026b601062acd5771941203"}
Nov 28 17:16:21 crc kubenswrapper[4710]: E1128 17:16:21.175385 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" podUID="5755fe75-0e8f-4b17-ab96-1efe5ace8c0f"
Nov 28 17:16:21 crc kubenswrapper[4710]: E1128 17:16:21.687694 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" podUID="b3e15c80-d7b6-4d62-9eff-011dee6d7b6e"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.894748 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d" event={"ID":"a66ff16d-f7e8-42d1-9b40-e992fd3aabb2","Type":"ContainerStarted","Data":"e862972403f14a2a4c65b9d39d2908106fb53e95200453eea622a720ff723892"}
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.894888 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.896395 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" event={"ID":"b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c","Type":"ContainerStarted","Data":"47c4d0051d948a75858e6dea82cbdeb583c60feaa02a7e9dccefc22cf7017efc"}
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.898209 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq" event={"ID":"92a0ce9b-b234-4954-bf20-890fa1a6785d","Type":"ContainerStarted","Data":"cbbaba4c4ee3424de89d1535d1ae0ddd1517a23f6bb9b7887e4088036d869af1"}
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.899325 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.904040 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" event={"ID":"5c695701-bc1a-4210-87ca-9ee354e664bc","Type":"ContainerStarted","Data":"183a51ae3446ea6c8ecbe0113b351bc958696be3c1a0bd2361c6f82aee194f89"}
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.904425 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.908453 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh" event={"ID":"377d6817-3f41-4bba-9078-fa77dcdb9591","Type":"ContainerStarted","Data":"c6d595b12d08a1b9846c5d56819039dcce100ec2dd3fc6ac952155bbf11bf679"}
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.908631 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.910472 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" event={"ID":"5755fe75-0e8f-4b17-ab96-1efe5ace8c0f","Type":"ContainerStarted","Data":"7789145c3de9093f3456620ac0d5896c50d547f600b2ad8c70da41cd3ecbc09c"}
Nov 28 17:16:21 crc kubenswrapper[4710]: E1128 17:16:21.911858 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" podUID="5755fe75-0e8f-4b17-ab96-1efe5ace8c0f"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.912459 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" event={"ID":"b3e15c80-d7b6-4d62-9eff-011dee6d7b6e","Type":"ContainerStarted","Data":"7d7760d137a5b7851801224b8c470719baf7a78f57722acdab4640f271869409"}
Nov 28 17:16:21 crc kubenswrapper[4710]: E1128 17:16:21.913382 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" podUID="b3e15c80-d7b6-4d62-9eff-011dee6d7b6e"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.916714 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc" event={"ID":"98f1d4c3-68b2-42b6-bbfa-e8aaec209764","Type":"ContainerStarted","Data":"4f23cc277117309396d24c8f4c2bd527700a0d8e00123d2b4c3f5a425b993828"}
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.917046 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.933658 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d" podStartSLOduration=5.141459525 podStartE2EDuration="58.933640944s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.332118524 +0000 UTC m=+1014.590418569" lastFinishedPulling="2025-11-28 17:16:19.124299943 +0000 UTC m=+1068.382599988" observedRunningTime="2025-11-28 17:16:21.92501587 +0000 UTC m=+1071.183315915" watchObservedRunningTime="2025-11-28 17:16:21.933640944 +0000 UTC m=+1071.191940989"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.941295 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6546668bfd-bcg9d"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.941795 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq"
Nov 28 17:16:21 crc kubenswrapper[4710]: I1128 17:16:21.942958 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.018964 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-hsntq" podStartSLOduration=3.7221820279999998 podStartE2EDuration="59.018939816s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.281845403 +0000 UTC m=+1014.540145448" lastFinishedPulling="2025-11-28 17:16:20.578603191 +0000 UTC m=+1069.836903236" observedRunningTime="2025-11-28 17:16:22.006901085 +0000 UTC m=+1071.265201140" watchObservedRunningTime="2025-11-28 17:16:22.018939816 +0000 UTC m=+1071.277239861"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.024289 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.053937 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-668d9c48b9-xxmrh" podStartSLOduration=5.069092179 podStartE2EDuration="59.053916105s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.155773613 +0000 UTC m=+1014.414073658" lastFinishedPulling="2025-11-28 17:16:19.140597539 +0000 UTC m=+1068.398897584" observedRunningTime="2025-11-28 17:16:22.044723733 +0000 UTC m=+1071.303023788" watchObservedRunningTime="2025-11-28 17:16:22.053916105 +0000 UTC m=+1071.312216160"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.077580 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" podStartSLOduration=13.283382637 podStartE2EDuration="59.077559504s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:26.011103882 +0000 UTC m=+1015.269403927" lastFinishedPulling="2025-11-28 17:16:11.805280749 +0000 UTC m=+1061.063580794" observedRunningTime="2025-11-28 17:16:22.076700867 +0000 UTC m=+1071.335000912" watchObservedRunningTime="2025-11-28 17:16:22.077559504 +0000 UTC m=+1071.335859549"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.102733 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-s7xmc" podStartSLOduration=5.184411875 podStartE2EDuration="59.102719191s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.332176336 +0000 UTC m=+1014.590476381" lastFinishedPulling="2025-11-28 17:16:19.250483652 +0000 UTC m=+1068.508783697" observedRunningTime="2025-11-28 17:16:22.09699833 +0000 UTC m=+1071.355298375" watchObservedRunningTime="2025-11-28 17:16:22.102719191 +0000 UTC m=+1071.361019236"
Nov 28 17:16:22 crc kubenswrapper[4710]: E1128 17:16:22.823122 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" podUID="81c851e8-e354-40c6-84cf-264f22be561f"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.957859 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" event={"ID":"b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c","Type":"ContainerStarted","Data":"a73b63063d8884fcbcb1fd2472101ad67f25aace85e1cc3239fd1d463139f3cc"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.960119 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" event={"ID":"a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf","Type":"ContainerStarted","Data":"4439d693c5ecb64b3d17eea03353aa4addb344e2419cf2c8f6d7ce3737e0e336"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.962541 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" event={"ID":"3c2144e6-7894-4e16-9952-f4a4d848aa55","Type":"ContainerStarted","Data":"ccabc712f0d697b2fde563832c132b27e3b0ea06f9007feb7c7fb62570aeb322"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.968029 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" event={"ID":"81c851e8-e354-40c6-84cf-264f22be561f","Type":"ContainerStarted","Data":"9e050d48fb15812a25d96063bf175277bf85a244a70f0f8d1f8ed89fbf44fa2f"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.970701 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" event={"ID":"419588b7-987b-44f5-81fd-76451ba0eb2d","Type":"ContainerStarted","Data":"b529334a57fe1fb998679900f6ced9bc82e2cbea1b1dd107efe99b1192f7dab9"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.973495 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" event={"ID":"bafb8518-b399-4fe2-9577-8bb606450832","Type":"ContainerStarted","Data":"d390047f7188a93027495a8262620f58d0bab490cb9a7225f47db47080103263"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.976027 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" event={"ID":"6ebfa717-92f8-4563-9456-644d1c107d6b","Type":"ContainerStarted","Data":"4fda3bf282aa39ab158027590666f47b474843352ead7754b9770737659ad2d8"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.977974 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4" event={"ID":"448f2efe-7d9c-476e-af1c-3ebf62e2b6cb","Type":"ContainerStarted","Data":"2d8c7bc2b20206cbdded25a1e3e9c5a28e2fe332455552da43f06a388df921f1"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.978213 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.980204 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" event={"ID":"baf8a76b-04b8-45d7-83b8-49ab823f2af1","Type":"ContainerStarted","Data":"d8a95f902f95ef2389fbc352f27e39ceaf6c4ade1d2768be9827e266b28223ae"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.980572 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.981102 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.991897 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" event={"ID":"ee89a2e2-f64c-4310-a271-8d4e7043279a","Type":"ContainerStarted","Data":"fd109dfc35d033a29b12d715a8ce6b33c5451bdee82e1e1ef63402c09d29f69e"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.994598 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" event={"ID":"5a6d5b4b-1460-41a8-a248-e814e32fb672","Type":"ContainerStarted","Data":"de3faa376631632fe6feac68f07a0f77bce422a4185883290a73f3f5ea664c1a"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.996848 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" podStartSLOduration=4.981698313 podStartE2EDuration="59.996831597s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.54812317 +0000 UTC m=+1014.806423215" lastFinishedPulling="2025-11-28 17:16:20.563256444 +0000 UTC m=+1069.821556499" observedRunningTime="2025-11-28 17:16:22.990905319 +0000 UTC m=+1072.249205364" watchObservedRunningTime="2025-11-28 17:16:22.996831597 +0000 UTC m=+1072.255131632"
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.997372 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg" event={"ID":"a70892da-8396-4018-89e0-f25e7221e674","Type":"ContainerStarted","Data":"e9f66f909d87ac28fbb7fb5644c302e5ec1240d45dd5cfdf557f2ccdb7f5c9e9"}
Nov 28 17:16:22 crc kubenswrapper[4710]: I1128 17:16:22.998692 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg"
Nov 28 17:16:23 crc kubenswrapper[4710]: I1128 17:16:23.000047 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" event={"ID":"e31192ae-8aa1-4376-a40b-4bd8e0e45928","Type":"ContainerStarted","Data":"d5c88580efc0ffc49154cb27a5784ce15380e63b91b8990dfd05c0b4c3d16f88"}
Nov 28 17:16:23 crc kubenswrapper[4710]: I1128 17:16:23.002644 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" event={"ID":"faacb861-2d5b-4629-8c6b-ae9427266b7b","Type":"ContainerStarted","Data":"1978374a90409d685446c70dee82b9e8bcb3d993cb0494e7679fe7ab11e74ec3"}
Nov 28 17:16:23 crc kubenswrapper[4710]: I1128 17:16:23.004353 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg"
Nov 28 17:16:23 crc kubenswrapper[4710]: I1128 17:16:23.041068 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" podStartSLOduration=33.33845072 podStartE2EDuration="1m0.041051528s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:45.100665066 +0000 UTC m=+1034.358965111" lastFinishedPulling="2025-11-28 17:16:11.803265864 +0000 UTC m=+1061.061565919" observedRunningTime="2025-11-28 17:16:23.04080169 +0000 UTC m=+1072.299101725" watchObservedRunningTime="2025-11-28 17:16:23.041051528 +0000 UTC m=+1072.299351573"
Nov 28 17:16:23 crc kubenswrapper[4710]: E1128 17:16:23.042882 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" podUID="e31192ae-8aa1-4376-a40b-4bd8e0e45928"
Nov 28 17:16:23 crc kubenswrapper[4710]: E1128 17:16:23.043265 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" podUID="faacb861-2d5b-4629-8c6b-ae9427266b7b"
Nov 28 17:16:23 crc kubenswrapper[4710]: I1128 17:16:23.093604 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-sbhc4" podStartSLOduration=4.866500619 podStartE2EDuration="1m0.093585422s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.340651124 +0000 UTC m=+1014.598951169" lastFinishedPulling="2025-11-28 17:16:20.567735927 +0000 UTC m=+1069.826035972" observedRunningTime="2025-11-28 17:16:23.089923547 +0000 UTC m=+1072.348223592" watchObservedRunningTime="2025-11-28 17:16:23.093585422 +0000 UTC m=+1072.351885467"
Nov 28 17:16:23 crc kubenswrapper[4710]: I1128 17:16:23.137824 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-7hsvg" podStartSLOduration=4.672437261 podStartE2EDuration="1m0.137805374s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.102222099 +0000 UTC m=+1014.360522134" lastFinishedPulling="2025-11-28 17:16:20.567590202 +0000 UTC m=+1069.825890247" observedRunningTime="2025-11-28 17:16:23.136388249 +0000 UTC m=+1072.394688304" watchObservedRunningTime="2025-11-28 17:16:23.137805374 +0000 UTC m=+1072.396105429"
Nov 28 17:16:23 crc kubenswrapper[4710]: I1128 17:16:23.145430 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" podStartSLOduration=6.986439204 podStartE2EDuration="1m0.145407005s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.980469172 +0000 UTC m=+1015.238769217" lastFinishedPulling="2025-11-28 17:16:19.139436973 +0000 UTC m=+1068.397737018" observedRunningTime="2025-11-28 17:16:23.121845308 +0000 UTC m=+1072.380145363" watchObservedRunningTime="2025-11-28 17:16:23.145407005 +0000 UTC m=+1072.403707050"
Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.012015 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" event={"ID":"81c851e8-e354-40c6-84cf-264f22be561f","Type":"ContainerStarted","Data":"8db4287b1f257a65587708cc0e483d142576ccd7305e8731313d34e9d918e0a8"}
Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.012687 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck"
Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.013854 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd"
Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.014107 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf"
Nov 28 17:16:24 crc kubenswrapper[4710]: E1128 17:16:24.014655 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:9aa8c03633e4b934c57868c1660acf47e7d386ac86bcb344df262c9ad76b8621\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t"
podUID="e31192ae-8aa1-4376-a40b-4bd8e0e45928" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.016340 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.016413 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-hznck" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.016728 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-2c9kf" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.019161 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-sns94" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.032007 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-q8vpd" podStartSLOduration=5.3804349160000005 podStartE2EDuration="1m1.031988011s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:24.911605336 +0000 UTC m=+1014.169905381" lastFinishedPulling="2025-11-28 17:16:20.563158431 +0000 UTC m=+1069.821458476" observedRunningTime="2025-11-28 17:16:24.031588238 +0000 UTC m=+1073.289888283" watchObservedRunningTime="2025-11-28 17:16:24.031988011 +0000 UTC m=+1073.290288066" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.058189 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" podStartSLOduration=7.464947698 podStartE2EDuration="1m1.058165901s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.548146901 +0000 UTC m=+1014.806446946" lastFinishedPulling="2025-11-28 17:16:19.141365104 +0000 UTC m=+1068.399665149" observedRunningTime="2025-11-28 17:16:24.052697948 +0000 UTC m=+1073.310997993" watchObservedRunningTime="2025-11-28 17:16:24.058165901 +0000 UTC m=+1073.316465946" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.105587 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" podStartSLOduration=2.842650683 podStartE2EDuration="1m1.105567563s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.310962914 +0000 UTC m=+1014.569262959" lastFinishedPulling="2025-11-28 17:16:23.573879784 +0000 UTC m=+1072.832179839" observedRunningTime="2025-11-28 17:16:24.104263421 +0000 UTC m=+1073.362563466" watchObservedRunningTime="2025-11-28 17:16:24.105567563 +0000 UTC m=+1073.363867608" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.177218 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" podStartSLOduration=6.141151955 podStartE2EDuration="1m1.177192163s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.534842339 +0000 UTC m=+1014.793142384" lastFinishedPulling="2025-11-28 17:16:20.570882547 +0000 UTC m=+1069.829182592" observedRunningTime="2025-11-28 17:16:24.171138942 +0000 UTC m=+1073.429438997" watchObservedRunningTime="2025-11-28 17:16:24.177192163 +0000 UTC 
m=+1073.435492218" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.216432 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.222580 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" podStartSLOduration=35.846375877 podStartE2EDuration="1m1.2225575s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:45.102231117 +0000 UTC m=+1034.360531162" lastFinishedPulling="2025-11-28 17:16:10.47841275 +0000 UTC m=+1059.736712785" observedRunningTime="2025-11-28 17:16:24.21624759 +0000 UTC m=+1073.474547635" watchObservedRunningTime="2025-11-28 17:16:24.2225575 +0000 UTC m=+1073.480857545" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.257225 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.260738 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" podStartSLOduration=5.808425861 podStartE2EDuration="1m1.260723281s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.110745968 +0000 UTC m=+1014.369046013" lastFinishedPulling="2025-11-28 17:16:20.563043388 +0000 UTC m=+1069.821343433" observedRunningTime="2025-11-28 17:16:24.259431029 +0000 UTC m=+1073.517731084" watchObservedRunningTime="2025-11-28 17:16:24.260723281 +0000 UTC m=+1073.519023316" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.261806 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-bjrnl" Nov 28 17:16:24 crc kubenswrapper[4710]: I1128 17:16:24.278903 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" podStartSLOduration=14.758789115999999 podStartE2EDuration="1m1.278871595s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.284959872 +0000 UTC m=+1014.543259917" lastFinishedPulling="2025-11-28 17:16:11.805042341 +0000 UTC m=+1061.063342396" observedRunningTime="2025-11-28 17:16:24.27682069 +0000 UTC m=+1073.535120735" watchObservedRunningTime="2025-11-28 17:16:24.278871595 +0000 UTC m=+1073.537171640" Nov 28 17:16:25 crc kubenswrapper[4710]: I1128 17:16:25.022046 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" event={"ID":"faacb861-2d5b-4629-8c6b-ae9427266b7b","Type":"ContainerStarted","Data":"7f38b224260bdf98349d3477fab23f84ae3ffa2cb2b18c3a685ccf83822f7c8f"} Nov 28 17:16:25 crc kubenswrapper[4710]: I1128 17:16:25.025019 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" Nov 28 17:16:25 crc kubenswrapper[4710]: I1128 17:16:25.042861 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" podStartSLOduration=3.007348763 podStartE2EDuration="1m2.042841056s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" 
firstStartedPulling="2025-11-28 17:15:25.548417299 +0000 UTC m=+1014.806717344" lastFinishedPulling="2025-11-28 17:16:24.583909592 +0000 UTC m=+1073.842209637" observedRunningTime="2025-11-28 17:16:25.040683158 +0000 UTC m=+1074.298983203" watchObservedRunningTime="2025-11-28 17:16:25.042841056 +0000 UTC m=+1074.301141101" Nov 28 17:16:26 crc kubenswrapper[4710]: I1128 17:16:26.625927 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-668879d68f-pd88h" Nov 28 17:16:30 crc kubenswrapper[4710]: I1128 17:16:30.207373 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:16:30 crc kubenswrapper[4710]: I1128 17:16:30.213616 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx" Nov 28 17:16:33 crc kubenswrapper[4710]: I1128 17:16:33.864940 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" Nov 28 17:16:33 crc kubenswrapper[4710]: I1128 17:16:33.867254 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-2gpds" Nov 28 17:16:33 crc kubenswrapper[4710]: I1128 17:16:33.912574 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" Nov 28 17:16:33 crc kubenswrapper[4710]: I1128 17:16:33.916306 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" Nov 28 17:16:33 crc kubenswrapper[4710]: I1128 17:16:33.926199 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-546d4bdf48-6h9mk" Nov 28 17:16:34 crc kubenswrapper[4710]: I1128 17:16:34.218741 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-867v6" Nov 28 17:16:34 crc kubenswrapper[4710]: I1128 17:16:34.236872 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" Nov 28 17:16:34 crc kubenswrapper[4710]: I1128 17:16:34.239210 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-wd77l" Nov 28 17:16:34 crc kubenswrapper[4710]: I1128 17:16:34.426088 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6b5d64d475-6p56z" Nov 28 17:16:40 crc kubenswrapper[4710]: I1128 17:16:40.152269 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" event={"ID":"5755fe75-0e8f-4b17-ab96-1efe5ace8c0f","Type":"ContainerStarted","Data":"faf899dcd54ba9c98174f54f0ff366c345868342e073409f52e35030650aca6a"} Nov 28 17:16:40 crc kubenswrapper[4710]: I1128 17:16:40.152998 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" Nov 28 17:16:40 crc kubenswrapper[4710]: I1128 17:16:40.154248 4710 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" event={"ID":"b3e15c80-d7b6-4d62-9eff-011dee6d7b6e","Type":"ContainerStarted","Data":"0a1d9ef3040c4f0a5e320adda2733dc0eef22a4b022c52eb673a454047da21b6"} Nov 28 17:16:40 crc kubenswrapper[4710]: I1128 17:16:40.154440 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" Nov 28 17:16:40 crc kubenswrapper[4710]: I1128 17:16:40.155876 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" event={"ID":"e31192ae-8aa1-4376-a40b-4bd8e0e45928","Type":"ContainerStarted","Data":"0ed0a8a5dda35d1d877e1e68619717fb7636fcfaaeef782f02cdb4ea135a5041"} Nov 28 17:16:40 crc kubenswrapper[4710]: I1128 17:16:40.156468 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" Nov 28 17:16:40 crc kubenswrapper[4710]: I1128 17:16:40.176070 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" podStartSLOduration=3.602186553 podStartE2EDuration="1m17.176056716s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.600388464 +0000 UTC m=+1014.858688509" lastFinishedPulling="2025-11-28 17:16:39.174258607 +0000 UTC m=+1088.432558672" observedRunningTime="2025-11-28 17:16:40.175157257 +0000 UTC m=+1089.433457312" watchObservedRunningTime="2025-11-28 17:16:40.176056716 +0000 UTC m=+1089.434356761" Nov 28 17:16:40 crc kubenswrapper[4710]: I1128 17:16:40.195859 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" podStartSLOduration=3.402495545 podStartE2EDuration="1m17.195839613s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.808614384 +0000 UTC m=+1015.066914429" lastFinishedPulling="2025-11-28 17:16:39.601958452 +0000 UTC m=+1088.860258497" observedRunningTime="2025-11-28 17:16:40.188359495 +0000 UTC m=+1089.446659540" watchObservedRunningTime="2025-11-28 17:16:40.195839613 +0000 UTC m=+1089.454139668" Nov 28 17:16:40 crc kubenswrapper[4710]: I1128 17:16:40.330534 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" podStartSLOduration=3.516745395 podStartE2EDuration="1m17.33051581s" podCreationTimestamp="2025-11-28 17:15:23 +0000 UTC" firstStartedPulling="2025-11-28 17:15:25.78923766 +0000 UTC m=+1015.047537705" lastFinishedPulling="2025-11-28 17:16:39.603008065 +0000 UTC m=+1088.861308120" observedRunningTime="2025-11-28 17:16:40.325258523 +0000 UTC m=+1089.583558578" watchObservedRunningTime="2025-11-28 17:16:40.33051581 +0000 UTC m=+1089.588815855" Nov 28 17:16:44 crc kubenswrapper[4710]: I1128 17:16:44.366921 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-45gjt" Nov 28 17:16:44 crc kubenswrapper[4710]: I1128 17:16:44.580194 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-22spv" Nov 28 17:16:44 crc kubenswrapper[4710]: I1128 17:16:44.676739 4710 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-rxp9t" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.637705 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4wphk"] Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.640397 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.644503 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.644731 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.644962 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.645103 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-5p75z" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.662628 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4wphk"] Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.669719 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wjvrq"] Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.671289 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.674570 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.683921 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wjvrq"] Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.790819 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phzq\" (UniqueName: \"kubernetes.io/projected/13a49183-f314-409c-b446-e085d2f10139-kube-api-access-8phzq\") pod \"dnsmasq-dns-675f4bcbfc-4wphk\" (UID: \"13a49183-f314-409c-b446-e085d2f10139\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.790870 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-wjvrq\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.791087 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a49183-f314-409c-b446-e085d2f10139-config\") pod \"dnsmasq-dns-675f4bcbfc-4wphk\" (UID: \"13a49183-f314-409c-b446-e085d2f10139\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.791131 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/c4d9daf2-3e4e-4de0-98f2-644bffb38269-kube-api-access-mgfrv\") pod \"dnsmasq-dns-78dd6ddcc-wjvrq\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.791216 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-config\") pod \"dnsmasq-dns-78dd6ddcc-wjvrq\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.892744 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a49183-f314-409c-b446-e085d2f10139-config\") pod \"dnsmasq-dns-675f4bcbfc-4wphk\" (UID: \"13a49183-f314-409c-b446-e085d2f10139\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.892815 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/c4d9daf2-3e4e-4de0-98f2-644bffb38269-kube-api-access-mgfrv\") pod \"dnsmasq-dns-78dd6ddcc-wjvrq\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.892877 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-config\") pod \"dnsmasq-dns-78dd6ddcc-wjvrq\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.892947 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8phzq\" (UniqueName: \"kubernetes.io/projected/13a49183-f314-409c-b446-e085d2f10139-kube-api-access-8phzq\") pod \"dnsmasq-dns-675f4bcbfc-4wphk\" (UID: \"13a49183-f314-409c-b446-e085d2f10139\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.892980 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-wjvrq\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.893662 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a49183-f314-409c-b446-e085d2f10139-config\") pod \"dnsmasq-dns-675f4bcbfc-4wphk\" (UID: \"13a49183-f314-409c-b446-e085d2f10139\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.893879 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-wjvrq\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.894706 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-config\") pod \"dnsmasq-dns-78dd6ddcc-wjvrq\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.912263 4710 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/c4d9daf2-3e4e-4de0-98f2-644bffb38269-kube-api-access-mgfrv\") pod \"dnsmasq-dns-78dd6ddcc-wjvrq\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.923467 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8phzq\" (UniqueName: \"kubernetes.io/projected/13a49183-f314-409c-b446-e085d2f10139-kube-api-access-8phzq\") pod \"dnsmasq-dns-675f4bcbfc-4wphk\" (UID: \"13a49183-f314-409c-b446-e085d2f10139\") " pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.973436 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:01 crc kubenswrapper[4710]: I1128 17:17:01.989439 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:02 crc kubenswrapper[4710]: I1128 17:17:02.451016 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4wphk"] Nov 28 17:17:02 crc kubenswrapper[4710]: W1128 17:17:02.458954 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13a49183_f314_409c_b446_e085d2f10139.slice/crio-32bd5af390b0413d153a71319bdfa387e4fb94c40297432fe5ff071ee06b3c53 WatchSource:0}: Error finding container 32bd5af390b0413d153a71319bdfa387e4fb94c40297432fe5ff071ee06b3c53: Status 404 returned error can't find the container with id 32bd5af390b0413d153a71319bdfa387e4fb94c40297432fe5ff071ee06b3c53 Nov 28 17:17:02 crc kubenswrapper[4710]: W1128 17:17:02.463536 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4d9daf2_3e4e_4de0_98f2_644bffb38269.slice/crio-eb119a3c74e1c5f8e6f973bc5db50fdc1229b0bc0ebe26e9855075952487bf82 WatchSource:0}: Error finding container eb119a3c74e1c5f8e6f973bc5db50fdc1229b0bc0ebe26e9855075952487bf82: Status 404 returned error can't find the container with id eb119a3c74e1c5f8e6f973bc5db50fdc1229b0bc0ebe26e9855075952487bf82 Nov 28 17:17:02 crc kubenswrapper[4710]: I1128 17:17:02.477084 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wjvrq"] Nov 28 17:17:03 crc kubenswrapper[4710]: I1128 17:17:03.362488 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" event={"ID":"c4d9daf2-3e4e-4de0-98f2-644bffb38269","Type":"ContainerStarted","Data":"eb119a3c74e1c5f8e6f973bc5db50fdc1229b0bc0ebe26e9855075952487bf82"} Nov 28 17:17:03 crc kubenswrapper[4710]: I1128 17:17:03.364839 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" event={"ID":"13a49183-f314-409c-b446-e085d2f10139","Type":"ContainerStarted","Data":"32bd5af390b0413d153a71319bdfa387e4fb94c40297432fe5ff071ee06b3c53"} Nov 28 17:17:04 crc kubenswrapper[4710]: I1128 17:17:04.966431 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4wphk"] Nov 28 17:17:04 crc kubenswrapper[4710]: I1128 17:17:04.991623 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2rr5n"] Nov 28 17:17:04 crc kubenswrapper[4710]: I1128 17:17:04.993807 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.020179 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2rr5n"] Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.158965 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-config\") pod \"dnsmasq-dns-666b6646f7-2rr5n\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.159056 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2rr5n\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.159106 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ggcg\" (UniqueName: \"kubernetes.io/projected/cee07afe-bef5-4d3d-afc5-80c629129a25-kube-api-access-9ggcg\") pod \"dnsmasq-dns-666b6646f7-2rr5n\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.261034 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ggcg\" (UniqueName: \"kubernetes.io/projected/cee07afe-bef5-4d3d-afc5-80c629129a25-kube-api-access-9ggcg\") pod \"dnsmasq-dns-666b6646f7-2rr5n\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.261139 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-config\") pod \"dnsmasq-dns-666b6646f7-2rr5n\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.261195 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2rr5n\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.263583 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2rr5n\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.264225 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-config\") pod \"dnsmasq-dns-666b6646f7-2rr5n\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.292001 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ggcg\" (UniqueName: 
\"kubernetes.io/projected/cee07afe-bef5-4d3d-afc5-80c629129a25-kube-api-access-9ggcg\") pod \"dnsmasq-dns-666b6646f7-2rr5n\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.322866 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wjvrq"] Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.335029 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.352412 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvmwp"] Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.359439 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.374112 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvmwp"] Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.463890 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-config\") pod \"dnsmasq-dns-57d769cc4f-jvmwp\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.463938 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jvmwp\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.463990 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qvw\" (UniqueName: \"kubernetes.io/projected/735e6f86-ee65-44b8-b685-aa3cf331c533-kube-api-access-s5qvw\") pod \"dnsmasq-dns-57d769cc4f-jvmwp\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.565608 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-config\") pod \"dnsmasq-dns-57d769cc4f-jvmwp\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.567919 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jvmwp\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.568015 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qvw\" (UniqueName: \"kubernetes.io/projected/735e6f86-ee65-44b8-b685-aa3cf331c533-kube-api-access-s5qvw\") pod \"dnsmasq-dns-57d769cc4f-jvmwp\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.566899 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-config\") pod \"dnsmasq-dns-57d769cc4f-jvmwp\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.569655 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jvmwp\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.596808 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5qvw\" (UniqueName: \"kubernetes.io/projected/735e6f86-ee65-44b8-b685-aa3cf331c533-kube-api-access-s5qvw\") pod \"dnsmasq-dns-57d769cc4f-jvmwp\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.784979 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:05 crc kubenswrapper[4710]: I1128 17:17:05.930813 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2rr5n"] Nov 28 17:17:05 crc kubenswrapper[4710]: W1128 17:17:05.938952 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcee07afe_bef5_4d3d_afc5_80c629129a25.slice/crio-59d86870b72cf489ed951148bb34b0d9a77eeaf0808972efab4cd1da41aa1d0f WatchSource:0}: Error finding container 59d86870b72cf489ed951148bb34b0d9a77eeaf0808972efab4cd1da41aa1d0f: Status 404 returned error can't find the container with id 59d86870b72cf489ed951148bb34b0d9a77eeaf0808972efab4cd1da41aa1d0f Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.122311 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.123790 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.126784 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.126831 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.126894 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.127018 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pk8nc" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.127249 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.127357 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.126828 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.146083 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.246912 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvmwp"] Nov 28 17:17:06 crc kubenswrapper[4710]: W1128 17:17:06.257185 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod735e6f86_ee65_44b8_b685_aa3cf331c533.slice/crio-4c2ca6eae6b067dc1c7e531f39e84e798e61740e562d62df42eebab0a0f777ac WatchSource:0}: Error finding container 4c2ca6eae6b067dc1c7e531f39e84e798e61740e562d62df42eebab0a0f777ac: Status 404 returned error can't find the container with id 4c2ca6eae6b067dc1c7e531f39e84e798e61740e562d62df42eebab0a0f777ac Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283053 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-config-data\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283194 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283238 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283367 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283391 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283439 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283476 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01f3773a-064e-4241-8327-758541098113-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283501 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283557 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283581 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlgt5\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-kube-api-access-mlgt5\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.283607 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01f3773a-064e-4241-8327-758541098113-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.387935 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01f3773a-064e-4241-8327-758541098113-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388021 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0" 
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388145 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388176 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlgt5\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-kube-api-access-mlgt5\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388207 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01f3773a-064e-4241-8327-758541098113-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388286 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-config-data\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388436 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388458 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388534 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388552 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.388601 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.389280 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.390225 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.391127 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.391189 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.391502 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-config-data\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.392026 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.395211 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01f3773a-064e-4241-8327-758541098113-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.395324 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.396890 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.403242 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" event={"ID":"cee07afe-bef5-4d3d-afc5-80c629129a25","Type":"ContainerStarted","Data":"59d86870b72cf489ed951148bb34b0d9a77eeaf0808972efab4cd1da41aa1d0f"}
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.406216 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" event={"ID":"735e6f86-ee65-44b8-b685-aa3cf331c533","Type":"ContainerStarted","Data":"4c2ca6eae6b067dc1c7e531f39e84e798e61740e562d62df42eebab0a0f777ac"}
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.409195 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlgt5\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-kube-api-access-mlgt5\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.414322 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01f3773a-064e-4241-8327-758541098113-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.425647 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.454411 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.487308 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.488685 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.488759 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.512492 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.512744 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.513010 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m6x6q"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.513151 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.513489 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.513612 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.515013 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.619500 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f399c745-4f4e-44e8-8813-af3861dc0eb0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.619963 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.620028 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.620117 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.620186 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.620230 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f399c745-4f4e-44e8-8813-af3861dc0eb0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.620290 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.620313 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.620342 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.620378 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-458jd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-kube-api-access-458jd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.620614 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.721806 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f399c745-4f4e-44e8-8813-af3861dc0eb0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.721884 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.721909 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.721933 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\"
(UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.721966 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-458jd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-kube-api-access-458jd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.722041 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.722070 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f399c745-4f4e-44e8-8813-af3861dc0eb0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.722111 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.722151 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.722175 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.722211 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.722545 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.722801 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.724641 
4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.725672 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.730091 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f399c745-4f4e-44e8-8813-af3861dc0eb0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.730459 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f399c745-4f4e-44e8-8813-af3861dc0eb0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.732162 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.732259 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.738538 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.751422 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.759672 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-458jd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-kube-api-access-458jd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.777671 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:06 crc kubenswrapper[4710]: I1128 17:17:06.920798 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.004817 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.403821 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.415395 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01f3773a-064e-4241-8327-758541098113","Type":"ContainerStarted","Data":"3c9cc93b0c733783dfec3570d5ce9eeb5563117b18cf2ef28612fccce71ff93a"} Nov 28 17:17:07 crc kubenswrapper[4710]: W1128 17:17:07.495537 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf399c745_4f4e_44e8_8813_af3861dc0eb0.slice/crio-c9fed77c3bc7a8de4268d897e047b33d1360f45fd3facf4a3521ec31dcc9451c WatchSource:0}: Error finding container c9fed77c3bc7a8de4268d897e047b33d1360f45fd3facf4a3521ec31dcc9451c: Status 404 returned error can't find the container with id c9fed77c3bc7a8de4268d897e047b33d1360f45fd3facf4a3521ec31dcc9451c Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.843036 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.848356 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.852074 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-dz5zp" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.852082 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.856131 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.858150 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.859224 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.859461 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.942169 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa87ab33-407c-463c-8f9e-79eb5e55c981-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.942528 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 
17:17:07.942559 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa87ab33-407c-463c-8f9e-79eb5e55c981-kolla-config\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.942587 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qwcj\" (UniqueName: \"kubernetes.io/projected/aa87ab33-407c-463c-8f9e-79eb5e55c981-kube-api-access-4qwcj\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.944529 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa87ab33-407c-463c-8f9e-79eb5e55c981-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.944777 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa87ab33-407c-463c-8f9e-79eb5e55c981-config-data-default\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.944817 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa87ab33-407c-463c-8f9e-79eb5e55c981-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:07 crc kubenswrapper[4710]: I1128 17:17:07.944868 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa87ab33-407c-463c-8f9e-79eb5e55c981-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.046477 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa87ab33-407c-463c-8f9e-79eb5e55c981-config-data-default\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.046530 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa87ab33-407c-463c-8f9e-79eb5e55c981-config-data-generated\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.046567 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa87ab33-407c-463c-8f9e-79eb5e55c981-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.046605 4710 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa87ab33-407c-463c-8f9e-79eb5e55c981-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.046632 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.046654 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa87ab33-407c-463c-8f9e-79eb5e55c981-kolla-config\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.046678 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qwcj\" (UniqueName: \"kubernetes.io/projected/aa87ab33-407c-463c-8f9e-79eb5e55c981-kube-api-access-4qwcj\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.046771 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa87ab33-407c-463c-8f9e-79eb5e55c981-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.047062 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.047600 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aa87ab33-407c-463c-8f9e-79eb5e55c981-kolla-config\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.048338 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aa87ab33-407c-463c-8f9e-79eb5e55c981-operator-scripts\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.048442 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aa87ab33-407c-463c-8f9e-79eb5e55c981-config-data-default\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.048650 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aa87ab33-407c-463c-8f9e-79eb5e55c981-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.052141 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa87ab33-407c-463c-8f9e-79eb5e55c981-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.052454 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa87ab33-407c-463c-8f9e-79eb5e55c981-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.071696 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.075626 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qwcj\" (UniqueName: \"kubernetes.io/projected/aa87ab33-407c-463c-8f9e-79eb5e55c981-kube-api-access-4qwcj\") pod \"openstack-galera-0\" (UID: \"aa87ab33-407c-463c-8f9e-79eb5e55c981\") " pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.182910 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 28 17:17:08 crc kubenswrapper[4710]: I1128 17:17:08.424258 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f399c745-4f4e-44e8-8813-af3861dc0eb0","Type":"ContainerStarted","Data":"c9fed77c3bc7a8de4268d897e047b33d1360f45fd3facf4a3521ec31dcc9451c"} Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.134610 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.139843 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.152356 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-m7pqk" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.153205 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.153492 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.155704 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.166407 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.174955 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/140993a2-eccd-471d-a0ce-df4600f96e20-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.175068 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140993a2-eccd-471d-a0ce-df4600f96e20-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.175242 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/140993a2-eccd-471d-a0ce-df4600f96e20-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.176114 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/140993a2-eccd-471d-a0ce-df4600f96e20-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.176175 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.176251 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w544m\" (UniqueName: \"kubernetes.io/projected/140993a2-eccd-471d-a0ce-df4600f96e20-kube-api-access-w544m\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.176303 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/140993a2-eccd-471d-a0ce-df4600f96e20-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.176575 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/140993a2-eccd-471d-a0ce-df4600f96e20-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.278442 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/140993a2-eccd-471d-a0ce-df4600f96e20-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.278503 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/140993a2-eccd-471d-a0ce-df4600f96e20-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.278535 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140993a2-eccd-471d-a0ce-df4600f96e20-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.278569 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/140993a2-eccd-471d-a0ce-df4600f96e20-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.278608 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/140993a2-eccd-471d-a0ce-df4600f96e20-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.278628 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.278656 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w544m\" (UniqueName: \"kubernetes.io/projected/140993a2-eccd-471d-a0ce-df4600f96e20-kube-api-access-w544m\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.278679 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140993a2-eccd-471d-a0ce-df4600f96e20-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.279082 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.279211 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/140993a2-eccd-471d-a0ce-df4600f96e20-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.279650 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/140993a2-eccd-471d-a0ce-df4600f96e20-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.280105 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/140993a2-eccd-471d-a0ce-df4600f96e20-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.280354 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/140993a2-eccd-471d-a0ce-df4600f96e20-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.290946 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/140993a2-eccd-471d-a0ce-df4600f96e20-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.292283 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/140993a2-eccd-471d-a0ce-df4600f96e20-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.302416 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w544m\" (UniqueName: \"kubernetes.io/projected/140993a2-eccd-471d-a0ce-df4600f96e20-kube-api-access-w544m\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.304744 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"140993a2-eccd-471d-a0ce-df4600f96e20\") " pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc 
kubenswrapper[4710]: I1128 17:17:09.483944 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.599163 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.600667 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.602736 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-b2cqv" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.603041 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.603816 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.615621 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.688017 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13db620f-d83a-4477-b98f-28c38017533c-config-data\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.688118 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db620f-d83a-4477-b98f-28c38017533c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.688161 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvc7d\" (UniqueName: \"kubernetes.io/projected/13db620f-d83a-4477-b98f-28c38017533c-kube-api-access-jvc7d\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.688192 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13db620f-d83a-4477-b98f-28c38017533c-kolla-config\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.688300 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/13db620f-d83a-4477-b98f-28c38017533c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.792534 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db620f-d83a-4477-b98f-28c38017533c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.792790 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvc7d\" 
(UniqueName: \"kubernetes.io/projected/13db620f-d83a-4477-b98f-28c38017533c-kube-api-access-jvc7d\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.792911 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13db620f-d83a-4477-b98f-28c38017533c-kolla-config\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.793114 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/13db620f-d83a-4477-b98f-28c38017533c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.793251 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13db620f-d83a-4477-b98f-28c38017533c-config-data\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.793703 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/13db620f-d83a-4477-b98f-28c38017533c-kolla-config\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.794353 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/13db620f-d83a-4477-b98f-28c38017533c-config-data\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.797688 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/13db620f-d83a-4477-b98f-28c38017533c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.799159 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13db620f-d83a-4477-b98f-28c38017533c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.811624 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvc7d\" (UniqueName: \"kubernetes.io/projected/13db620f-d83a-4477-b98f-28c38017533c-kube-api-access-jvc7d\") pod \"memcached-0\" (UID: \"13db620f-d83a-4477-b98f-28c38017533c\") " pod="openstack/memcached-0" Nov 28 17:17:09 crc kubenswrapper[4710]: I1128 17:17:09.923208 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 28 17:17:11 crc kubenswrapper[4710]: I1128 17:17:11.438391 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:17:11 crc kubenswrapper[4710]: I1128 17:17:11.439852 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:17:11 crc kubenswrapper[4710]: I1128 17:17:11.448438 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:17:11 crc kubenswrapper[4710]: I1128 17:17:11.452916 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-7pdzw" Nov 28 17:17:11 crc kubenswrapper[4710]: I1128 17:17:11.531422 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spg4j\" (UniqueName: \"kubernetes.io/projected/02a0cc30-b7bd-4e67-9aad-a4a895909384-kube-api-access-spg4j\") pod \"kube-state-metrics-0\" (UID: \"02a0cc30-b7bd-4e67-9aad-a4a895909384\") " pod="openstack/kube-state-metrics-0" Nov 28 17:17:11 crc kubenswrapper[4710]: I1128 17:17:11.632722 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spg4j\" (UniqueName: \"kubernetes.io/projected/02a0cc30-b7bd-4e67-9aad-a4a895909384-kube-api-access-spg4j\") pod \"kube-state-metrics-0\" (UID: \"02a0cc30-b7bd-4e67-9aad-a4a895909384\") " pod="openstack/kube-state-metrics-0" Nov 28 17:17:11 crc kubenswrapper[4710]: I1128 17:17:11.664659 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spg4j\" (UniqueName: \"kubernetes.io/projected/02a0cc30-b7bd-4e67-9aad-a4a895909384-kube-api-access-spg4j\") pod \"kube-state-metrics-0\" (UID: \"02a0cc30-b7bd-4e67-9aad-a4a895909384\") " pod="openstack/kube-state-metrics-0" Nov 28 17:17:11 crc kubenswrapper[4710]: I1128 17:17:11.762650 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:17:13 crc kubenswrapper[4710]: I1128 17:17:13.344015 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:17:13 crc kubenswrapper[4710]: I1128 17:17:13.344430 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.092488 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4h2ch"] Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.094299 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.096789 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.096951 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-ph6wl" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.097106 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.114440 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4h2ch"] Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.126514 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-t2rdj"] Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.128906 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.144759 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-t2rdj"] Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.189520 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkw9v\" (UniqueName: \"kubernetes.io/projected/c9a14e8a-2aba-4827-8ff4-48858bec6075-kube-api-access-wkw9v\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.189582 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a14e8a-2aba-4827-8ff4-48858bec6075-var-run\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.189623 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a14e8a-2aba-4827-8ff4-48858bec6075-var-run-ovn\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.189649 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a14e8a-2aba-4827-8ff4-48858bec6075-ovn-controller-tls-certs\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.189689 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a14e8a-2aba-4827-8ff4-48858bec6075-scripts\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.189875 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a14e8a-2aba-4827-8ff4-48858bec6075-combined-ca-bundle\") pod \"ovn-controller-4h2ch\" (UID: 
\"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.189905 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a14e8a-2aba-4827-8ff4-48858bec6075-var-log-ovn\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.291857 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-var-run\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.292183 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a14e8a-2aba-4827-8ff4-48858bec6075-var-run\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.292325 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkw9v\" (UniqueName: \"kubernetes.io/projected/c9a14e8a-2aba-4827-8ff4-48858bec6075-kube-api-access-wkw9v\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.292457 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a14e8a-2aba-4827-8ff4-48858bec6075-var-run-ovn\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.292575 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9a14e8a-2aba-4827-8ff4-48858bec6075-ovn-controller-tls-certs\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.292737 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-etc-ovs\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.292886 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a14e8a-2aba-4827-8ff4-48858bec6075-scripts\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.292994 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rzs\" (UniqueName: \"kubernetes.io/projected/8704135f-2602-4980-bdf2-875f4a9391e3-kube-api-access-p8rzs\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc 
kubenswrapper[4710]: I1128 17:17:16.293308 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a14e8a-2aba-4827-8ff4-48858bec6075-combined-ca-bundle\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.293437 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-var-lib\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.293546 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8704135f-2602-4980-bdf2-875f4a9391e3-scripts\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.293656 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a14e8a-2aba-4827-8ff4-48858bec6075-var-log-ovn\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.293802 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-var-log\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.292813 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c9a14e8a-2aba-4827-8ff4-48858bec6075-var-run\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.293835 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a14e8a-2aba-4827-8ff4-48858bec6075-var-log-ovn\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.293673 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/c9a14e8a-2aba-4827-8ff4-48858bec6075-var-run-ovn\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.297262 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c9a14e8a-2aba-4827-8ff4-48858bec6075-scripts\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.299333 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c9a14e8a-2aba-4827-8ff4-48858bec6075-ovn-controller-tls-certs\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.299474 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9a14e8a-2aba-4827-8ff4-48858bec6075-combined-ca-bundle\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.314320 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkw9v\" (UniqueName: \"kubernetes.io/projected/c9a14e8a-2aba-4827-8ff4-48858bec6075-kube-api-access-wkw9v\") pod \"ovn-controller-4h2ch\" (UID: \"c9a14e8a-2aba-4827-8ff4-48858bec6075\") " pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.395574 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-var-run\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.395677 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-etc-ovs\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.395715 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8rzs\" (UniqueName: \"kubernetes.io/projected/8704135f-2602-4980-bdf2-875f4a9391e3-kube-api-access-p8rzs\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.395753 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-var-lib\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.395790 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8704135f-2602-4980-bdf2-875f4a9391e3-scripts\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.395824 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-var-log\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.396091 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-var-log\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 
17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.397251 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-var-run\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.397343 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-var-lib\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.397432 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/8704135f-2602-4980-bdf2-875f4a9391e3-etc-ovs\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.399041 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8704135f-2602-4980-bdf2-875f4a9391e3-scripts\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.413338 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8rzs\" (UniqueName: \"kubernetes.io/projected/8704135f-2602-4980-bdf2-875f4a9391e3-kube-api-access-p8rzs\") pod \"ovn-controller-ovs-t2rdj\" (UID: \"8704135f-2602-4980-bdf2-875f4a9391e3\") " pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.419789 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:16 crc kubenswrapper[4710]: I1128 17:17:16.447881 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.141120 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.145065 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.149434 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.149520 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.149652 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-kp45h" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.149878 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.150024 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.150550 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.231486 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8cv7\" (UniqueName: \"kubernetes.io/projected/4f8f21ee-4b67-4bd1-b46d-46c95015c134-kube-api-access-w8cv7\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.231911 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f8f21ee-4b67-4bd1-b46d-46c95015c134-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.231977 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8f21ee-4b67-4bd1-b46d-46c95015c134-config\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.232232 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f8f21ee-4b67-4bd1-b46d-46c95015c134-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.232255 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f8f21ee-4b67-4bd1-b46d-46c95015c134-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.232407 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f8f21ee-4b67-4bd1-b46d-46c95015c134-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.232450 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f8f21ee-4b67-4bd1-b46d-46c95015c134-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.232601 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.318627 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.320665 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.327287 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qdwlk" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.327372 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.327708 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.329671 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.333989 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.335172 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8cv7\" (UniqueName: \"kubernetes.io/projected/4f8f21ee-4b67-4bd1-b46d-46c95015c134-kube-api-access-w8cv7\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.335259 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f8f21ee-4b67-4bd1-b46d-46c95015c134-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.335320 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f8f21ee-4b67-4bd1-b46d-46c95015c134-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.335343 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8f21ee-4b67-4bd1-b46d-46c95015c134-config\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.335365 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f8f21ee-4b67-4bd1-b46d-46c95015c134-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: 
\"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.335393 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f8f21ee-4b67-4bd1-b46d-46c95015c134-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.335417 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f8f21ee-4b67-4bd1-b46d-46c95015c134-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.335442 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.335848 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.345641 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8f21ee-4b67-4bd1-b46d-46c95015c134-config\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.353889 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4f8f21ee-4b67-4bd1-b46d-46c95015c134-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.353999 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f8f21ee-4b67-4bd1-b46d-46c95015c134-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.360897 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f8f21ee-4b67-4bd1-b46d-46c95015c134-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.361213 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f8f21ee-4b67-4bd1-b46d-46c95015c134-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.371432 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8cv7\" (UniqueName: 
\"kubernetes.io/projected/4f8f21ee-4b67-4bd1-b46d-46c95015c134-kube-api-access-w8cv7\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.375223 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.379496 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f8f21ee-4b67-4bd1-b46d-46c95015c134-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"4f8f21ee-4b67-4bd1-b46d-46c95015c134\") " pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.437731 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgph\" (UniqueName: \"kubernetes.io/projected/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-kube-api-access-tfgph\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.437843 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.437873 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.437899 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-config\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.437935 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.437955 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.437987 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-scripts\") pod \"ovsdbserver-nb-0\" (UID: 
\"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.438045 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.475549 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.539661 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgph\" (UniqueName: \"kubernetes.io/projected/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-kube-api-access-tfgph\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.539792 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.539826 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.539852 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-config\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.539892 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.539917 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.539947 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.540021 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc 
kubenswrapper[4710]: I1128 17:17:18.540261 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.547630 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.548185 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-config\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.549121 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.550222 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.550422 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.552531 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.562822 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgph\" (UniqueName: \"kubernetes.io/projected/05caeb9e-2c7b-4199-9bb9-3611e4eb3f21-kube-api-access-tfgph\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.574415 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21\") " pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:18 crc kubenswrapper[4710]: I1128 17:17:18.654103 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:24 crc kubenswrapper[4710]: I1128 17:17:24.122981 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:17:24 crc kubenswrapper[4710]: E1128 17:17:24.456970 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 17:17:24 crc kubenswrapper[4710]: E1128 17:17:24.457454 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8phzq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-4wphk_openstack(13a49183-f314-409c-b446-e085d2f10139): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:17:24 crc kubenswrapper[4710]: E1128 17:17:24.458933 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" podUID="13a49183-f314-409c-b446-e085d2f10139" Nov 28 17:17:24 crc kubenswrapper[4710]: E1128 17:17:24.499090 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 28 17:17:24 crc kubenswrapper[4710]: E1128 17:17:24.499240 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgfrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-wjvrq_openstack(c4d9daf2-3e4e-4de0-98f2-644bffb38269): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:17:24 crc kubenswrapper[4710]: E1128 17:17:24.500562 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" podUID="c4d9daf2-3e4e-4de0-98f2-644bffb38269" Nov 28 17:17:24 crc kubenswrapper[4710]: I1128 17:17:24.597611 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02a0cc30-b7bd-4e67-9aad-a4a895909384","Type":"ContainerStarted","Data":"09fe0892bd008f9b1384248a98f7dbb69c11d5b76c59886a002f015ee83ee0c9"} Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.219998 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.320085 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.329840 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.468270 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8phzq\" (UniqueName: \"kubernetes.io/projected/13a49183-f314-409c-b446-e085d2f10139-kube-api-access-8phzq\") pod \"13a49183-f314-409c-b446-e085d2f10139\" (UID: \"13a49183-f314-409c-b446-e085d2f10139\") " Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.468613 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a49183-f314-409c-b446-e085d2f10139-config\") pod \"13a49183-f314-409c-b446-e085d2f10139\" (UID: \"13a49183-f314-409c-b446-e085d2f10139\") " Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.468676 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-config\") pod \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.468703 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-dns-svc\") pod \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.468899 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/c4d9daf2-3e4e-4de0-98f2-644bffb38269-kube-api-access-mgfrv\") pod \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\" (UID: \"c4d9daf2-3e4e-4de0-98f2-644bffb38269\") " Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.469365 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-config" (OuterVolumeSpecName: "config") pod "c4d9daf2-3e4e-4de0-98f2-644bffb38269" (UID: "c4d9daf2-3e4e-4de0-98f2-644bffb38269"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.469378 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4d9daf2-3e4e-4de0-98f2-644bffb38269" (UID: "c4d9daf2-3e4e-4de0-98f2-644bffb38269"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.469599 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13a49183-f314-409c-b446-e085d2f10139-config" (OuterVolumeSpecName: "config") pod "13a49183-f314-409c-b446-e085d2f10139" (UID: "13a49183-f314-409c-b446-e085d2f10139"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.469632 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.469657 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d9daf2-3e4e-4de0-98f2-644bffb38269-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.474886 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d9daf2-3e4e-4de0-98f2-644bffb38269-kube-api-access-mgfrv" (OuterVolumeSpecName: "kube-api-access-mgfrv") pod "c4d9daf2-3e4e-4de0-98f2-644bffb38269" (UID: "c4d9daf2-3e4e-4de0-98f2-644bffb38269"). InnerVolumeSpecName "kube-api-access-mgfrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.475173 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13a49183-f314-409c-b446-e085d2f10139-kube-api-access-8phzq" (OuterVolumeSpecName: "kube-api-access-8phzq") pod "13a49183-f314-409c-b446-e085d2f10139" (UID: "13a49183-f314-409c-b446-e085d2f10139"). InnerVolumeSpecName "kube-api-access-8phzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.544682 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-t2rdj"] Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.572162 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgfrv\" (UniqueName: \"kubernetes.io/projected/c4d9daf2-3e4e-4de0-98f2-644bffb38269-kube-api-access-mgfrv\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.572191 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8phzq\" (UniqueName: \"kubernetes.io/projected/13a49183-f314-409c-b446-e085d2f10139-kube-api-access-8phzq\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.572205 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a49183-f314-409c-b446-e085d2f10139-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.575968 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.594135 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4h2ch"] Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.611790 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"13db620f-d83a-4477-b98f-28c38017533c","Type":"ContainerStarted","Data":"664445a50f3169de20a840ca689888084eeb32f7e1c741fee16db5062e24d767"} Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.613571 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" event={"ID":"13a49183-f314-409c-b446-e085d2f10139","Type":"ContainerDied","Data":"32bd5af390b0413d153a71319bdfa387e4fb94c40297432fe5ff071ee06b3c53"} Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.613672 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-4wphk" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.614867 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.621937 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.621935 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-wjvrq" event={"ID":"c4d9daf2-3e4e-4de0-98f2-644bffb38269","Type":"ContainerDied","Data":"eb119a3c74e1c5f8e6f973bc5db50fdc1229b0bc0ebe26e9855075952487bf82"} Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.642293 4710 generic.go:334] "Generic (PLEG): container finished" podID="735e6f86-ee65-44b8-b685-aa3cf331c533" containerID="1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f" exitCode=0 Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.642484 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" event={"ID":"735e6f86-ee65-44b8-b685-aa3cf331c533","Type":"ContainerDied","Data":"1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f"} Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.660604 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" event={"ID":"cee07afe-bef5-4d3d-afc5-80c629129a25","Type":"ContainerDied","Data":"870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84"} Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.660776 4710 generic.go:334] "Generic (PLEG): container finished" podID="cee07afe-bef5-4d3d-afc5-80c629129a25" containerID="870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84" exitCode=0 Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.677841 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.763618 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wjvrq"] Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.774175 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wjvrq"] Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.795487 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4wphk"] Nov 28 17:17:25 crc kubenswrapper[4710]: I1128 17:17:25.802207 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-4wphk"] Nov 28 17:17:25 crc kubenswrapper[4710]: W1128 17:17:25.904693 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9a14e8a_2aba_4827_8ff4_48858bec6075.slice/crio-49afd5e1b988a8b8442cf43d6e2ec9ccb423458d919665b3ccc3b31bb5718eec WatchSource:0}: Error finding container 49afd5e1b988a8b8442cf43d6e2ec9ccb423458d919665b3ccc3b31bb5718eec: Status 404 returned error can't find the container with id 49afd5e1b988a8b8442cf43d6e2ec9ccb423458d919665b3ccc3b31bb5718eec Nov 28 17:17:26 crc kubenswrapper[4710]: W1128 17:17:26.117324 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa87ab33_407c_463c_8f9e_79eb5e55c981.slice/crio-58199fb50b58f8916b2df556284c552ec85ec70cd8f010f9dc984200e8afe3f4 WatchSource:0}: Error finding 
container 58199fb50b58f8916b2df556284c552ec85ec70cd8f010f9dc984200e8afe3f4: Status 404 returned error can't find the container with id 58199fb50b58f8916b2df556284c552ec85ec70cd8f010f9dc984200e8afe3f4 Nov 28 17:17:26 crc kubenswrapper[4710]: I1128 17:17:26.235230 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 28 17:17:26 crc kubenswrapper[4710]: I1128 17:17:26.673594 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4h2ch" event={"ID":"c9a14e8a-2aba-4827-8ff4-48858bec6075","Type":"ContainerStarted","Data":"49afd5e1b988a8b8442cf43d6e2ec9ccb423458d919665b3ccc3b31bb5718eec"} Nov 28 17:17:26 crc kubenswrapper[4710]: I1128 17:17:26.675557 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f399c745-4f4e-44e8-8813-af3861dc0eb0","Type":"ContainerStarted","Data":"a547221951088401addaed6821940f14517efca1a5c55afed29e17422d05f3b6"} Nov 28 17:17:26 crc kubenswrapper[4710]: I1128 17:17:26.676798 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"140993a2-eccd-471d-a0ce-df4600f96e20","Type":"ContainerStarted","Data":"f9f286b846d72739590c004dae39f80a97d971218ea37a8aece56a1d968c708a"} Nov 28 17:17:26 crc kubenswrapper[4710]: I1128 17:17:26.678479 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01f3773a-064e-4241-8327-758541098113","Type":"ContainerStarted","Data":"2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f"} Nov 28 17:17:26 crc kubenswrapper[4710]: I1128 17:17:26.679811 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa87ab33-407c-463c-8f9e-79eb5e55c981","Type":"ContainerStarted","Data":"58199fb50b58f8916b2df556284c552ec85ec70cd8f010f9dc984200e8afe3f4"} Nov 28 17:17:26 crc kubenswrapper[4710]: I1128 17:17:26.681218 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-t2rdj" event={"ID":"8704135f-2602-4980-bdf2-875f4a9391e3","Type":"ContainerStarted","Data":"632018304c707e4bf3c6cca2cfab96d6ff72ce57bdcc94cd7364ff18e31acf33"} Nov 28 17:17:26 crc kubenswrapper[4710]: W1128 17:17:26.887171 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05caeb9e_2c7b_4199_9bb9_3611e4eb3f21.slice/crio-97c2aa8a18c833f18b431d88788d8549975b4d2cebfb3d9ae9980cac7d2db9b9 WatchSource:0}: Error finding container 97c2aa8a18c833f18b431d88788d8549975b4d2cebfb3d9ae9980cac7d2db9b9: Status 404 returned error can't find the container with id 97c2aa8a18c833f18b431d88788d8549975b4d2cebfb3d9ae9980cac7d2db9b9 Nov 28 17:17:27 crc kubenswrapper[4710]: I1128 17:17:27.160309 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13a49183-f314-409c-b446-e085d2f10139" path="/var/lib/kubelet/pods/13a49183-f314-409c-b446-e085d2f10139/volumes" Nov 28 17:17:27 crc kubenswrapper[4710]: I1128 17:17:27.161254 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d9daf2-3e4e-4de0-98f2-644bffb38269" path="/var/lib/kubelet/pods/c4d9daf2-3e4e-4de0-98f2-644bffb38269/volumes" Nov 28 17:17:27 crc kubenswrapper[4710]: I1128 17:17:27.698282 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21","Type":"ContainerStarted","Data":"97c2aa8a18c833f18b431d88788d8549975b4d2cebfb3d9ae9980cac7d2db9b9"} Nov 28 
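# NOTE: The "SyncLoop (PLEG)" entries come from the kubelet's pod lifecycle event generator, which relists the CRI runtime and turns new container IDs into ContainerStarted/ContainerDied events. The interleaved manager.go:1169 warnings ("Status 404 ... can't find the container") are cadvisor racing the runtime: a crio-<id> cgroup appeared or vanished between the watch notification and the lookup; during bulk pod startup like this they are typically transient and harmless. The same transitions can be observed from the API side with a watch; a sketch under the usual clientset assumption, with an illustrative helper name:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // watchContainers follows one pod and prints container-state changes,
    // the API-side view of the PLEG ContainerStarted/ContainerDied events.
    func watchContainers(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
        w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
            FieldSelector: "metadata.name=" + name,
        })
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            pod, ok := ev.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            for _, st := range pod.Status.ContainerStatuses {
                switch {
                case st.State.Running != nil:
                    fmt.Printf("%s/%s running since %s\n", name, st.Name, st.State.Running.StartedAt)
                case st.State.Terminated != nil:
                    fmt.Printf("%s/%s exited with code %d\n", name, st.Name, st.State.Terminated.ExitCode)
                }
            }
        }
        return nil
    }
# END NOTE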
17:17:27 crc kubenswrapper[4710]: I1128 17:17:27.699533 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4f8f21ee-4b67-4bd1-b46d-46c95015c134","Type":"ContainerStarted","Data":"dd2f3f5c595aa923b37c3615dd95d6a056161371185072ce126a92f9ba08f4ab"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.803146 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02a0cc30-b7bd-4e67-9aad-a4a895909384","Type":"ContainerStarted","Data":"82e30b277816c509cbf159b8d022dcdb19ca69df8dd65c6a2d4237d41a279506"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.804634 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.809163 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21","Type":"ContainerStarted","Data":"cfcd3ccdd1b7d157ecfb50a3fd599e624df292879e953d13908c04b82791efa5"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.811473 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-t2rdj" event={"ID":"8704135f-2602-4980-bdf2-875f4a9391e3","Type":"ContainerStarted","Data":"82d96fa8881f27c6beaa24c8975eb9eb1ccacc6cac43981bf157e1d54912195e"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.815145 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"13db620f-d83a-4477-b98f-28c38017533c","Type":"ContainerStarted","Data":"57dcf08eb789588967a3de2fad14fbec781b50ddfd449b42a4189d8eb070b76b"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.815387 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.822844 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"140993a2-eccd-471d-a0ce-df4600f96e20","Type":"ContainerStarted","Data":"377a19a27a7771797822d2f3c8d82aed4032495af012a255bb76b3b14dfc06e5"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.832043 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4f8f21ee-4b67-4bd1-b46d-46c95015c134","Type":"ContainerStarted","Data":"7a9cd5bcbd5ea74b4a71725f5fe3708ddc33f542fe88ab286959371f454c1b40"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.838538 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=14.038854586 podStartE2EDuration="27.838514946s" podCreationTimestamp="2025-11-28 17:17:11 +0000 UTC" firstStartedPulling="2025-11-28 17:17:24.474577854 +0000 UTC m=+1133.732877899" lastFinishedPulling="2025-11-28 17:17:38.274238194 +0000 UTC m=+1147.532538259" observedRunningTime="2025-11-28 17:17:38.832910199 +0000 UTC m=+1148.091210254" watchObservedRunningTime="2025-11-28 17:17:38.838514946 +0000 UTC m=+1148.096814991" Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.854514 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa87ab33-407c-463c-8f9e-79eb5e55c981","Type":"ContainerStarted","Data":"e2e274d1d31cc0cf54ce2bddb6b18cc60c3c438b504c5e48cbc476ea7abfef80"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.877476 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" 
event={"ID":"735e6f86-ee65-44b8-b685-aa3cf331c533","Type":"ContainerStarted","Data":"477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.878309 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.890048 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=17.364222497 podStartE2EDuration="29.890025588s" podCreationTimestamp="2025-11-28 17:17:09 +0000 UTC" firstStartedPulling="2025-11-28 17:17:25.363639159 +0000 UTC m=+1134.621939204" lastFinishedPulling="2025-11-28 17:17:37.88944225 +0000 UTC m=+1147.147742295" observedRunningTime="2025-11-28 17:17:38.886203548 +0000 UTC m=+1148.144503593" watchObservedRunningTime="2025-11-28 17:17:38.890025588 +0000 UTC m=+1148.148325633" Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.896795 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" event={"ID":"cee07afe-bef5-4d3d-afc5-80c629129a25","Type":"ContainerStarted","Data":"f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c"} Nov 28 17:17:38 crc kubenswrapper[4710]: I1128 17:17:38.897794 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:39 crc kubenswrapper[4710]: I1128 17:17:39.028860 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" podStartSLOduration=16.355976601 podStartE2EDuration="35.028841918s" podCreationTimestamp="2025-11-28 17:17:04 +0000 UTC" firstStartedPulling="2025-11-28 17:17:05.943966456 +0000 UTC m=+1115.202266501" lastFinishedPulling="2025-11-28 17:17:24.616831783 +0000 UTC m=+1133.875131818" observedRunningTime="2025-11-28 17:17:39.021166764 +0000 UTC m=+1148.279466809" watchObservedRunningTime="2025-11-28 17:17:39.028841918 +0000 UTC m=+1148.287141963" Nov 28 17:17:39 crc kubenswrapper[4710]: I1128 17:17:39.049683 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" podStartSLOduration=15.688647063 podStartE2EDuration="34.049664837s" podCreationTimestamp="2025-11-28 17:17:05 +0000 UTC" firstStartedPulling="2025-11-28 17:17:06.260187537 +0000 UTC m=+1115.518487582" lastFinishedPulling="2025-11-28 17:17:24.621205301 +0000 UTC m=+1133.879505356" observedRunningTime="2025-11-28 17:17:39.043464391 +0000 UTC m=+1148.301764426" watchObservedRunningTime="2025-11-28 17:17:39.049664837 +0000 UTC m=+1148.307964872" Nov 28 17:17:39 crc kubenswrapper[4710]: I1128 17:17:39.909520 4710 generic.go:334] "Generic (PLEG): container finished" podID="8704135f-2602-4980-bdf2-875f4a9391e3" containerID="82d96fa8881f27c6beaa24c8975eb9eb1ccacc6cac43981bf157e1d54912195e" exitCode=0 Nov 28 17:17:39 crc kubenswrapper[4710]: I1128 17:17:39.909583 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-t2rdj" event={"ID":"8704135f-2602-4980-bdf2-875f4a9391e3","Type":"ContainerDied","Data":"82d96fa8881f27c6beaa24c8975eb9eb1ccacc6cac43981bf157e1d54912195e"} Nov 28 17:17:39 crc kubenswrapper[4710]: I1128 17:17:39.919266 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4h2ch" 
event={"ID":"c9a14e8a-2aba-4827-8ff4-48858bec6075","Type":"ContainerStarted","Data":"e2b1030fcdffe0e6009f8d77896623caa87711228e7b623d376c8baf3a4f8701"} Nov 28 17:17:39 crc kubenswrapper[4710]: I1128 17:17:39.975942 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4h2ch" podStartSLOduration=11.60883293 podStartE2EDuration="23.975919971s" podCreationTimestamp="2025-11-28 17:17:16 +0000 UTC" firstStartedPulling="2025-11-28 17:17:25.907849675 +0000 UTC m=+1135.166149720" lastFinishedPulling="2025-11-28 17:17:38.274936706 +0000 UTC m=+1147.533236761" observedRunningTime="2025-11-28 17:17:39.952318893 +0000 UTC m=+1149.210618948" watchObservedRunningTime="2025-11-28 17:17:39.975919971 +0000 UTC m=+1149.234220026" Nov 28 17:17:40 crc kubenswrapper[4710]: I1128 17:17:40.932602 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-t2rdj" event={"ID":"8704135f-2602-4980-bdf2-875f4a9391e3","Type":"ContainerStarted","Data":"305426237ec3de07f2fa8dfcfa3805488c4a73794420c2b0df70e81bbc6dcbb1"} Nov 28 17:17:40 crc kubenswrapper[4710]: I1128 17:17:40.933427 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4h2ch" Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.962365 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"4f8f21ee-4b67-4bd1-b46d-46c95015c134","Type":"ContainerStarted","Data":"8398dbe86769b8cb2d503bc70e352bc22a76579840cacf25456bef45fb3febaa"} Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.965327 4710 generic.go:334] "Generic (PLEG): container finished" podID="aa87ab33-407c-463c-8f9e-79eb5e55c981" containerID="e2e274d1d31cc0cf54ce2bddb6b18cc60c3c438b504c5e48cbc476ea7abfef80" exitCode=0 Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.965382 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa87ab33-407c-463c-8f9e-79eb5e55c981","Type":"ContainerDied","Data":"e2e274d1d31cc0cf54ce2bddb6b18cc60c3c438b504c5e48cbc476ea7abfef80"} Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.968745 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"05caeb9e-2c7b-4199-9bb9-3611e4eb3f21","Type":"ContainerStarted","Data":"ccfd093fc2747d0a1cf2afafd01c6c83cfc917c0dd57d01763e7655f2e226950"} Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.973806 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-t2rdj" event={"ID":"8704135f-2602-4980-bdf2-875f4a9391e3","Type":"ContainerStarted","Data":"0a45cdb502c00759f2c45d5ae4a5292ef305b92de2de406877481472d2119cee"} Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.974552 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.974584 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.977869 4710 generic.go:334] "Generic (PLEG): container finished" podID="140993a2-eccd-471d-a0ce-df4600f96e20" containerID="377a19a27a7771797822d2f3c8d82aed4032495af012a255bb76b3b14dfc06e5" exitCode=0 Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.977913 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"140993a2-eccd-471d-a0ce-df4600f96e20","Type":"ContainerDied","Data":"377a19a27a7771797822d2f3c8d82aed4032495af012a255bb76b3b14dfc06e5"} Nov 28 17:17:42 crc kubenswrapper[4710]: I1128 17:17:42.988003 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=11.099934464 podStartE2EDuration="25.987984535s" podCreationTimestamp="2025-11-28 17:17:17 +0000 UTC" firstStartedPulling="2025-11-28 17:17:26.883814985 +0000 UTC m=+1136.142115030" lastFinishedPulling="2025-11-28 17:17:41.771865056 +0000 UTC m=+1151.030165101" observedRunningTime="2025-11-28 17:17:42.978723872 +0000 UTC m=+1152.237023927" watchObservedRunningTime="2025-11-28 17:17:42.987984535 +0000 UTC m=+1152.246284580" Nov 28 17:17:43 crc kubenswrapper[4710]: I1128 17:17:43.016969 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-t2rdj" podStartSLOduration=14.559256121 podStartE2EDuration="27.016947763s" podCreationTimestamp="2025-11-28 17:17:16 +0000 UTC" firstStartedPulling="2025-11-28 17:17:25.797894181 +0000 UTC m=+1135.056194236" lastFinishedPulling="2025-11-28 17:17:38.255585793 +0000 UTC m=+1147.513885878" observedRunningTime="2025-11-28 17:17:43.006852744 +0000 UTC m=+1152.265152809" watchObservedRunningTime="2025-11-28 17:17:43.016947763 +0000 UTC m=+1152.275247808" Nov 28 17:17:43 crc kubenswrapper[4710]: I1128 17:17:43.076720 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=11.191657041 podStartE2EDuration="26.076701137s" podCreationTimestamp="2025-11-28 17:17:17 +0000 UTC" firstStartedPulling="2025-11-28 17:17:26.891206589 +0000 UTC m=+1136.149506634" lastFinishedPulling="2025-11-28 17:17:41.776250685 +0000 UTC m=+1151.034550730" observedRunningTime="2025-11-28 17:17:43.074381184 +0000 UTC m=+1152.332681229" watchObservedRunningTime="2025-11-28 17:17:43.076701137 +0000 UTC m=+1152.335001182" Nov 28 17:17:43 crc kubenswrapper[4710]: I1128 17:17:43.343746 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:17:43 crc kubenswrapper[4710]: I1128 17:17:43.344179 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:17:43 crc kubenswrapper[4710]: I1128 17:17:43.476692 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:43 crc kubenswrapper[4710]: I1128 17:17:43.655176 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:43 crc kubenswrapper[4710]: I1128 17:17:43.990991 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"aa87ab33-407c-463c-8f9e-79eb5e55c981","Type":"ContainerStarted","Data":"f3aa671672885f8155c7126c3ea42c0e838e3919826957c37f377bbc93811050"} Nov 28 17:17:43 crc kubenswrapper[4710]: I1128 17:17:43.994977 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"140993a2-eccd-471d-a0ce-df4600f96e20","Type":"ContainerStarted","Data":"d2a01ffe897a8643a1bb35a84866e709d2ed77675cbdd50d8a1ad4718ccb9708"} Nov 28 17:17:44 crc kubenswrapper[4710]: I1128 17:17:44.035482 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.272504475 podStartE2EDuration="38.03545167s" podCreationTimestamp="2025-11-28 17:17:06 +0000 UTC" firstStartedPulling="2025-11-28 17:17:26.126864666 +0000 UTC m=+1135.385164731" lastFinishedPulling="2025-11-28 17:17:37.889811881 +0000 UTC m=+1147.148111926" observedRunningTime="2025-11-28 17:17:44.018956477 +0000 UTC m=+1153.277256602" watchObservedRunningTime="2025-11-28 17:17:44.03545167 +0000 UTC m=+1153.293751755" Nov 28 17:17:44 crc kubenswrapper[4710]: I1128 17:17:44.048358 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.538176675 podStartE2EDuration="36.048338959s" podCreationTimestamp="2025-11-28 17:17:08 +0000 UTC" firstStartedPulling="2025-11-28 17:17:25.680658006 +0000 UTC m=+1134.938958051" lastFinishedPulling="2025-11-28 17:17:38.19082029 +0000 UTC m=+1147.449120335" observedRunningTime="2025-11-28 17:17:44.047696919 +0000 UTC m=+1153.305996994" watchObservedRunningTime="2025-11-28 17:17:44.048338959 +0000 UTC m=+1153.306639014" Nov 28 17:17:44 crc kubenswrapper[4710]: I1128 17:17:44.925231 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 28 17:17:45 crc kubenswrapper[4710]: I1128 17:17:45.337958 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:45 crc kubenswrapper[4710]: I1128 17:17:45.476475 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:45 crc kubenswrapper[4710]: I1128 17:17:45.533438 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:45 crc kubenswrapper[4710]: I1128 17:17:45.654911 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:45 crc kubenswrapper[4710]: I1128 17:17:45.691477 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:45 crc kubenswrapper[4710]: I1128 17:17:45.786248 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:17:45 crc kubenswrapper[4710]: I1128 17:17:45.836579 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2rr5n"] Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.012807 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" podUID="cee07afe-bef5-4d3d-afc5-80c629129a25" containerName="dnsmasq-dns" containerID="cri-o://f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c" gracePeriod=10 Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.063383 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.067404 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.336651 4710 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-4gw2q"] Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.338371 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.340658 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.357146 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-4gw2q"] Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.416675 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-48css"] Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.421126 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.433031 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.433828 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1cd28302-c515-4e75-8092-cc99b132bc7e-ovs-rundir\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.433941 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.433969 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1cd28302-c515-4e75-8092-cc99b132bc7e-ovn-rundir\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.433991 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd28302-c515-4e75-8092-cc99b132bc7e-combined-ca-bundle\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.434056 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvz4b\" (UniqueName: \"kubernetes.io/projected/4da74cb4-e394-44d8-b75d-15e2b8454456-kube-api-access-mvz4b\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.434081 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd28302-c515-4e75-8092-cc99b132bc7e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " 
pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.434100 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-config\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.434117 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.434138 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd28302-c515-4e75-8092-cc99b132bc7e-config\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.434158 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn9t9\" (UniqueName: \"kubernetes.io/projected/1cd28302-c515-4e75-8092-cc99b132bc7e-kube-api-access-xn9t9\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.437428 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-48css"] Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536054 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1cd28302-c515-4e75-8092-cc99b132bc7e-ovs-rundir\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536161 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536180 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1cd28302-c515-4e75-8092-cc99b132bc7e-ovn-rundir\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536199 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd28302-c515-4e75-8092-cc99b132bc7e-combined-ca-bundle\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536255 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvz4b\" (UniqueName: 
\"kubernetes.io/projected/4da74cb4-e394-44d8-b75d-15e2b8454456-kube-api-access-mvz4b\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536277 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd28302-c515-4e75-8092-cc99b132bc7e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536299 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-config\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536315 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536338 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd28302-c515-4e75-8092-cc99b132bc7e-config\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.536358 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn9t9\" (UniqueName: \"kubernetes.io/projected/1cd28302-c515-4e75-8092-cc99b132bc7e-kube-api-access-xn9t9\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.537065 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/1cd28302-c515-4e75-8092-cc99b132bc7e-ovs-rundir\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.537583 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/1cd28302-c515-4e75-8092-cc99b132bc7e-ovn-rundir\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.537643 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.537733 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: 
\"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.537861 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-config\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.538339 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cd28302-c515-4e75-8092-cc99b132bc7e-config\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.543071 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cd28302-c515-4e75-8092-cc99b132bc7e-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.544655 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cd28302-c515-4e75-8092-cc99b132bc7e-combined-ca-bundle\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.563666 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvz4b\" (UniqueName: \"kubernetes.io/projected/4da74cb4-e394-44d8-b75d-15e2b8454456-kube-api-access-mvz4b\") pod \"dnsmasq-dns-5bf47b49b7-4gw2q\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.570339 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn9t9\" (UniqueName: \"kubernetes.io/projected/1cd28302-c515-4e75-8092-cc99b132bc7e-kube-api-access-xn9t9\") pod \"ovn-controller-metrics-48css\" (UID: \"1cd28302-c515-4e75-8092-cc99b132bc7e\") " pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.619973 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-4gw2q"] Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.620643 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.660553 4710 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.664920 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-s66fr"] Nov 28 17:17:46 crc kubenswrapper[4710]: E1128 17:17:46.665410 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee07afe-bef5-4d3d-afc5-80c629129a25" containerName="init" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.665432 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee07afe-bef5-4d3d-afc5-80c629129a25" containerName="init" Nov 28 17:17:46 crc kubenswrapper[4710]: E1128 17:17:46.665457 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee07afe-bef5-4d3d-afc5-80c629129a25" containerName="dnsmasq-dns" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.665465 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee07afe-bef5-4d3d-afc5-80c629129a25" containerName="dnsmasq-dns" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.665669 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee07afe-bef5-4d3d-afc5-80c629129a25" containerName="dnsmasq-dns" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.666937 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.674359 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.682708 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s66fr"] Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.738393 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-dns-svc\") pod \"cee07afe-bef5-4d3d-afc5-80c629129a25\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.738457 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ggcg\" (UniqueName: \"kubernetes.io/projected/cee07afe-bef5-4d3d-afc5-80c629129a25-kube-api-access-9ggcg\") pod \"cee07afe-bef5-4d3d-afc5-80c629129a25\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.738508 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-config\") pod \"cee07afe-bef5-4d3d-afc5-80c629129a25\" (UID: \"cee07afe-bef5-4d3d-afc5-80c629129a25\") " Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.739236 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-config\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.739290 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gssv\" (UniqueName: \"kubernetes.io/projected/fdf4441a-0900-45fc-b59a-0e8939d339b3-kube-api-access-6gssv\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") "
pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.739330 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.739365 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-dns-svc\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.739479 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.746354 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee07afe-bef5-4d3d-afc5-80c629129a25-kube-api-access-9ggcg" (OuterVolumeSpecName: "kube-api-access-9ggcg") pod "cee07afe-bef5-4d3d-afc5-80c629129a25" (UID: "cee07afe-bef5-4d3d-afc5-80c629129a25"). InnerVolumeSpecName "kube-api-access-9ggcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.750948 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.767330 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-48css" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.800664 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cee07afe-bef5-4d3d-afc5-80c629129a25" (UID: "cee07afe-bef5-4d3d-afc5-80c629129a25"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.812067 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.822645 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-config" (OuterVolumeSpecName: "config") pod "cee07afe-bef5-4d3d-afc5-80c629129a25" (UID: "cee07afe-bef5-4d3d-afc5-80c629129a25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.825070 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.825321 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.825326 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-zjqb8" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.825650 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.848692 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-config\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.851330 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gssv\" (UniqueName: \"kubernetes.io/projected/fdf4441a-0900-45fc-b59a-0e8939d339b3-kube-api-access-6gssv\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.851465 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfch\" (UniqueName: \"kubernetes.io/projected/88021b14-adad-452b-af97-74186171d987-kube-api-access-9wfch\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.851543 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/88021b14-adad-452b-af97-74186171d987-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.849602 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-config\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.851752 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.852000 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-dns-svc\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.852161 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/88021b14-adad-452b-af97-74186171d987-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.852424 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88021b14-adad-452b-af97-74186171d987-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.852521 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88021b14-adad-452b-af97-74186171d987-scripts\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.852634 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/88021b14-adad-452b-af97-74186171d987-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.852743 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.852894 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-dns-svc\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.852551 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.853309 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.857633 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.858306 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88021b14-adad-452b-af97-74186171d987-config\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.858422 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.858437 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ggcg\" (UniqueName: \"kubernetes.io/projected/cee07afe-bef5-4d3d-afc5-80c629129a25-kube-api-access-9ggcg\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.858449 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cee07afe-bef5-4d3d-afc5-80c629129a25-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.873555 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gssv\" (UniqueName: \"kubernetes.io/projected/fdf4441a-0900-45fc-b59a-0e8939d339b3-kube-api-access-6gssv\") pod \"dnsmasq-dns-8554648995-s66fr\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") " pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.960370 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88021b14-adad-452b-af97-74186171d987-config\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.960430 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfch\" (UniqueName: \"kubernetes.io/projected/88021b14-adad-452b-af97-74186171d987-kube-api-access-9wfch\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.960454 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/88021b14-adad-452b-af97-74186171d987-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.960552 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/88021b14-adad-452b-af97-74186171d987-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.960612 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88021b14-adad-452b-af97-74186171d987-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.960638 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88021b14-adad-452b-af97-74186171d987-scripts\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.961392 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88021b14-adad-452b-af97-74186171d987-config\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 
17:17:46.962430 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/88021b14-adad-452b-af97-74186171d987-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.963522 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/88021b14-adad-452b-af97-74186171d987-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.963896 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88021b14-adad-452b-af97-74186171d987-scripts\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.965423 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/88021b14-adad-452b-af97-74186171d987-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.966495 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88021b14-adad-452b-af97-74186171d987-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.966710 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/88021b14-adad-452b-af97-74186171d987-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:46 crc kubenswrapper[4710]: I1128 17:17:46.982795 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfch\" (UniqueName: \"kubernetes.io/projected/88021b14-adad-452b-af97-74186171d987-kube-api-access-9wfch\") pod \"ovn-northd-0\" (UID: \"88021b14-adad-452b-af97-74186171d987\") " pod="openstack/ovn-northd-0" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.025857 4710 generic.go:334] "Generic (PLEG): container finished" podID="cee07afe-bef5-4d3d-afc5-80c629129a25" containerID="f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c" exitCode=0 Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.026019 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.026056 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" event={"ID":"cee07afe-bef5-4d3d-afc5-80c629129a25","Type":"ContainerDied","Data":"f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c"} Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.026102 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2rr5n" event={"ID":"cee07afe-bef5-4d3d-afc5-80c629129a25","Type":"ContainerDied","Data":"59d86870b72cf489ed951148bb34b0d9a77eeaf0808972efab4cd1da41aa1d0f"} Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.026119 4710 scope.go:117] "RemoveContainer" containerID="f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.065086 4710 scope.go:117] "RemoveContainer" containerID="870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.070594 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2rr5n"] Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.077492 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2rr5n"] Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.088140 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.141492 4710 scope.go:117] "RemoveContainer" containerID="f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.144239 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 28 17:17:47 crc kubenswrapper[4710]: E1128 17:17:47.144772 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c\": container with ID starting with f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c not found: ID does not exist" containerID="f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.144802 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c"} err="failed to get container status \"f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c\": rpc error: code = NotFound desc = could not find container \"f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c\": container with ID starting with f514fe236e1a624853c5f29603ffa39233f91bb3e8e16114dfcb2bd5b2d3200c not found: ID does not exist" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.144827 4710 scope.go:117] "RemoveContainer" containerID="870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84" Nov 28 17:17:47 crc kubenswrapper[4710]: E1128 17:17:47.150902 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84\": container with ID starting with 870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84 not found: ID does not exist" containerID="870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.150942 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84"} err="failed to get container status \"870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84\": rpc error: code = NotFound desc = could not find container \"870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84\": container with ID starting with 870240fecb5ac5c004dc3e8450159359f2197d6e70d9bee0251b575084833f84 not found: ID does not exist" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.189612 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee07afe-bef5-4d3d-afc5-80c629129a25" path="/var/lib/kubelet/pods/cee07afe-bef5-4d3d-afc5-80c629129a25/volumes" Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.196125 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-4gw2q"] Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.505092 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-48css"] Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.627613 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 28 17:17:47 crc kubenswrapper[4710]: I1128 17:17:47.840942 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s66fr"] Nov 28 17:17:47 crc kubenswrapper[4710]: W1128 17:17:47.846792 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdf4441a_0900_45fc_b59a_0e8939d339b3.slice/crio-e79e7daab2d4679f5c44c704a8556f217db3171723f8a4908b86cfc84e4b8313 
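WatchSource:0}: Error finding container e79e7daab2d4679f5c44c704a8556f217db3171723f8a4908b86cfc84e4b8313: Status 404 returned error can't find the container with id e79e7daab2d4679f5c44c704a8556f217db3171723f8a4908b86cfc84e4b8313

The paired errors just above (log.go:32 "ContainerStatus from runtime service failed" with code NotFound, then pod_container_deletor.go:53 "DeleteContainer returned error") look alarming but read as a benign cleanup race: the dnsmasq-dns-666b6646f7-2rr5n containers were already removed, so the follow-up status lookup finds nothing and the kubelet simply moves on (the orphaned volumes dir is cleaned a few lines later). A hedged sketch of the usual idempotent-delete pattern against a gRPC runtime API, our own illustration rather than kubelet source:

```go
// Demonstrates treating a gRPC NotFound during container removal as success:
// the container is already gone, so deletion has effectively completed.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound maps NotFound to nil so a second RemoveContainer attempt
// (or a status lookup racing with one) does not fail the cleanup path.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	// Synthetic error shaped like the runtime responses logged above.
	err := status.Error(codes.NotFound, `could not find container "f514fe23..."`)
	fmt.Println(ignoreNotFound(err)) // <nil>: safe to continue tearing down the pod
}
```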
Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.035339 4710 generic.go:334] "Generic (PLEG): container finished" podID="4da74cb4-e394-44d8-b75d-15e2b8454456" containerID="c7072230e63eb579904b3ace14ffe8df834fba63aad68f9c68c220821414afd1" exitCode=0 Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.035394 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" event={"ID":"4da74cb4-e394-44d8-b75d-15e2b8454456","Type":"ContainerDied","Data":"c7072230e63eb579904b3ace14ffe8df834fba63aad68f9c68c220821414afd1"} Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.035696 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" event={"ID":"4da74cb4-e394-44d8-b75d-15e2b8454456","Type":"ContainerStarted","Data":"15a46f1141de737e9a4b07fc1f575c590a4c1871e1412b64858ec4d3fa42356d"} Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.037985 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-48css" event={"ID":"1cd28302-c515-4e75-8092-cc99b132bc7e","Type":"ContainerStarted","Data":"b0a17e41532b20f2c81a416113abd90ac1be8d30d1be0d842cb71d8ca141daf7"} Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.044116 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"88021b14-adad-452b-af97-74186171d987","Type":"ContainerStarted","Data":"a25e00c0da26d587b3d740a64962bc38e1ba0264ef8057515c629cf38cca2996"} Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.049890 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s66fr" event={"ID":"fdf4441a-0900-45fc-b59a-0e8939d339b3","Type":"ContainerStarted","Data":"e79e7daab2d4679f5c44c704a8556f217db3171723f8a4908b86cfc84e4b8313"} Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.182988 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.183040 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.372477 4710 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:48 crc kubenswrapper[4710]: E1128 17:17:48.378733 4710 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdf4441a_0900_45fc_b59a_0e8939d339b3.slice/crio-d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.519469 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-config\") pod \"4da74cb4-e394-44d8-b75d-15e2b8454456\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.519618 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvz4b\" (UniqueName: \"kubernetes.io/projected/4da74cb4-e394-44d8-b75d-15e2b8454456-kube-api-access-mvz4b\") pod \"4da74cb4-e394-44d8-b75d-15e2b8454456\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.519813 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-ovsdbserver-nb\") pod \"4da74cb4-e394-44d8-b75d-15e2b8454456\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.519869 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-dns-svc\") pod \"4da74cb4-e394-44d8-b75d-15e2b8454456\" (UID: \"4da74cb4-e394-44d8-b75d-15e2b8454456\") " Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.527027 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4da74cb4-e394-44d8-b75d-15e2b8454456-kube-api-access-mvz4b" (OuterVolumeSpecName: "kube-api-access-mvz4b") pod "4da74cb4-e394-44d8-b75d-15e2b8454456" (UID: "4da74cb4-e394-44d8-b75d-15e2b8454456"). InnerVolumeSpecName "kube-api-access-mvz4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.544918 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-config" (OuterVolumeSpecName: "config") pod "4da74cb4-e394-44d8-b75d-15e2b8454456" (UID: "4da74cb4-e394-44d8-b75d-15e2b8454456"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.545330 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4da74cb4-e394-44d8-b75d-15e2b8454456" (UID: "4da74cb4-e394-44d8-b75d-15e2b8454456"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.551266 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4da74cb4-e394-44d8-b75d-15e2b8454456" (UID: "4da74cb4-e394-44d8-b75d-15e2b8454456"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.622723 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.622778 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.622797 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4da74cb4-e394-44d8-b75d-15e2b8454456-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:48 crc kubenswrapper[4710]: I1128 17:17:48.622812 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvz4b\" (UniqueName: \"kubernetes.io/projected/4da74cb4-e394-44d8-b75d-15e2b8454456-kube-api-access-mvz4b\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.060198 4710 generic.go:334] "Generic (PLEG): container finished" podID="fdf4441a-0900-45fc-b59a-0e8939d339b3" containerID="d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f" exitCode=0 Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.060247 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s66fr" event={"ID":"fdf4441a-0900-45fc-b59a-0e8939d339b3","Type":"ContainerDied","Data":"d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f"} Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.062459 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" event={"ID":"4da74cb4-e394-44d8-b75d-15e2b8454456","Type":"ContainerDied","Data":"15a46f1141de737e9a4b07fc1f575c590a4c1871e1412b64858ec4d3fa42356d"} Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.062508 4710 scope.go:117] "RemoveContainer" containerID="c7072230e63eb579904b3ace14ffe8df834fba63aad68f9c68c220821414afd1" Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.062629 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-4gw2q" Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.068917 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-48css" event={"ID":"1cd28302-c515-4e75-8092-cc99b132bc7e","Type":"ContainerStarted","Data":"16a475ba05b3be42116f820652ee74fca5822b3ad83f208cb2e0e3f3e8b66d6c"} Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.127854 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-4gw2q"] Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.146899 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-48css" podStartSLOduration=3.146881455 podStartE2EDuration="3.146881455s" podCreationTimestamp="2025-11-28 17:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:17:49.131544489 +0000 UTC m=+1158.389844534" watchObservedRunningTime="2025-11-28 17:17:49.146881455 +0000 UTC m=+1158.405181500" Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.159947 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-4gw2q"] Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.484717 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.484771 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:49 crc kubenswrapper[4710]: I1128 17:17:49.580159 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:50 crc kubenswrapper[4710]: I1128 17:17:50.079520 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"88021b14-adad-452b-af97-74186171d987","Type":"ContainerStarted","Data":"c529aba93f691491459b71b70021cf1ac8a3d77191c4c7d019753a4810e6a0cb"} Nov 28 17:17:50 crc kubenswrapper[4710]: I1128 17:17:50.079613 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"88021b14-adad-452b-af97-74186171d987","Type":"ContainerStarted","Data":"c83842eae746f47cc5bf9467f15d6228afbe1de718d001929d0f63a91f5ec2d1"} Nov 28 17:17:50 crc kubenswrapper[4710]: I1128 17:17:50.079786 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 28 17:17:50 crc kubenswrapper[4710]: I1128 17:17:50.081370 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s66fr" event={"ID":"fdf4441a-0900-45fc-b59a-0e8939d339b3","Type":"ContainerStarted","Data":"4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d"} Nov 28 17:17:50 crc kubenswrapper[4710]: I1128 17:17:50.081530 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-s66fr" Nov 28 17:17:50 crc kubenswrapper[4710]: I1128 17:17:50.101225 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.275905784 podStartE2EDuration="4.101200989s" podCreationTimestamp="2025-11-28 17:17:46 +0000 UTC" firstStartedPulling="2025-11-28 17:17:47.649671277 +0000 UTC m=+1156.907971312" lastFinishedPulling="2025-11-28 17:17:49.474966472 +0000 UTC m=+1158.733266517" observedRunningTime="2025-11-28 
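17:17:50.096731266 +0000 UTC m=+1159.355031311" watchObservedRunningTime="2025-11-28 17:17:50.101200989 +0000 UTC m=+1159.359501034"

The probe entries around this point trace the normal lifecycle: probe="startup" flips from unhealthy to started for the galera pods, after which their readiness probes report ready; the only hard failure in this section is the earlier liveness probe against machine-config-daemon-9mscc, refused on http://127.0.0.1:8798/health. A minimal Go sketch of an HTTP Get probe in that spirit (our toy, not kubelet's prober.go): 2xx/3xx counts as success, anything else, including "connection refused", is a failure.

```go
// Minimal HTTP probe: succeeds on 2xx/3xx, fails on errors or other statuses.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probeHTTP(url string, timeout time.Duration) (ok bool, detail string) {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 127.0.0.1:8798: connect: connection refused",
		// the output seen in the prober.go failure logged earlier.
		return false, err.Error()
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400, resp.Status
}

func main() {
	ok, detail := probeHTTP("http://127.0.0.1:8798/health", time.Second)
	fmt.Println(ok, detail)
}
```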
17:17:50.096731266 +0000 UTC m=+1159.355031311" watchObservedRunningTime="2025-11-28 17:17:50.101200989 +0000 UTC m=+1159.359501034" Nov 28 17:17:50 crc kubenswrapper[4710]: I1128 17:17:50.119452 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-s66fr" podStartSLOduration=4.119433006 podStartE2EDuration="4.119433006s" podCreationTimestamp="2025-11-28 17:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:17:50.113076224 +0000 UTC m=+1159.371376289" watchObservedRunningTime="2025-11-28 17:17:50.119433006 +0000 UTC m=+1159.377733051" Nov 28 17:17:50 crc kubenswrapper[4710]: I1128 17:17:50.173106 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 28 17:17:50 crc kubenswrapper[4710]: I1128 17:17:50.940191 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.034049 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.158988 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4da74cb4-e394-44d8-b75d-15e2b8454456" path="/var/lib/kubelet/pods/4da74cb4-e394-44d8-b75d-15e2b8454456/volumes" Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.789886 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.867589 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s66fr"] Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.905815 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fkpk9"] Nov 28 17:17:51 crc kubenswrapper[4710]: E1128 17:17:51.906323 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4da74cb4-e394-44d8-b75d-15e2b8454456" containerName="init" Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.906348 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="4da74cb4-e394-44d8-b75d-15e2b8454456" containerName="init" Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.906557 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="4da74cb4-e394-44d8-b75d-15e2b8454456" containerName="init" Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.907826 4710 util.go:30] "No sandbox for pod can be found. 
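The two pod_startup_latency_tracker entries above illustrate the relationship between the fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span with the image-pull window (firstStartedPulling to lastFinishedPulling) subtracted; when the pull timestamps are the zero time, the two durations are equal. For ovn-northd-0, 4.101s minus the ~1.825s pull window gives ~2.276s, matching the reported SLO duration to within rounding. A minimal Go sketch of that arithmetic, using the timestamps from the entries above (not kubelet's actual implementation):

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

// mustParse parses the timestamp format used in the log fields above.
func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-28 17:17:46 +0000 UTC")
	firstPull := mustParse("2025-11-28 17:17:47.649671277 +0000 UTC")
	lastPull := mustParse("2025-11-28 17:17:49.474966472 +0000 UTC")
	observed := mustParse("2025-11-28 17:17:50.101200989 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration: ~4.101s
	slo := e2e - lastPull.Sub(firstPull) // pull window excluded: ~2.276s
	fmt.Println(e2e, slo)
}
```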
Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.907826 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.915267 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fkpk9"]
Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.994050 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.994140 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.994232 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.994267 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-config\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:51 crc kubenswrapper[4710]: I1128 17:17:51.994309 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq4nn\" (UniqueName: \"kubernetes.io/projected/0242e508-bdc7-4611-92f2-6df38d51821c-kube-api-access-dq4nn\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.095692 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.095739 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-config\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.095827 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dq4nn\" (UniqueName: \"kubernetes.io/projected/0242e508-bdc7-4611-92f2-6df38d51821c-kube-api-access-dq4nn\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.095874 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.095922 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.097308 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.097335 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.097515 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.097794 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-config\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.098506 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-s66fr" podUID="fdf4441a-0900-45fc-b59a-0e8939d339b3" containerName="dnsmasq-dns" containerID="cri-o://4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d" gracePeriod=10
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.122830 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dq4nn\" (UniqueName: \"kubernetes.io/projected/0242e508-bdc7-4611-92f2-6df38d51821c-kube-api-access-dq4nn\") pod \"dnsmasq-dns-b8fbc5445-fkpk9\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.234896 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.700578 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-s66fr"
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.814084 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-nb\") pod \"fdf4441a-0900-45fc-b59a-0e8939d339b3\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") "
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.814164 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gssv\" (UniqueName: \"kubernetes.io/projected/fdf4441a-0900-45fc-b59a-0e8939d339b3-kube-api-access-6gssv\") pod \"fdf4441a-0900-45fc-b59a-0e8939d339b3\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") "
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.814338 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-config\") pod \"fdf4441a-0900-45fc-b59a-0e8939d339b3\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") "
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.814406 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-dns-svc\") pod \"fdf4441a-0900-45fc-b59a-0e8939d339b3\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") "
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.814454 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-sb\") pod \"fdf4441a-0900-45fc-b59a-0e8939d339b3\" (UID: \"fdf4441a-0900-45fc-b59a-0e8939d339b3\") "
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.827649 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdf4441a-0900-45fc-b59a-0e8939d339b3-kube-api-access-6gssv" (OuterVolumeSpecName: "kube-api-access-6gssv") pod "fdf4441a-0900-45fc-b59a-0e8939d339b3" (UID: "fdf4441a-0900-45fc-b59a-0e8939d339b3"). InnerVolumeSpecName "kube-api-access-6gssv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.840543 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fkpk9"]
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.876899 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fdf4441a-0900-45fc-b59a-0e8939d339b3" (UID: "fdf4441a-0900-45fc-b59a-0e8939d339b3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.880181 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-config" (OuterVolumeSpecName: "config") pod "fdf4441a-0900-45fc-b59a-0e8939d339b3" (UID: "fdf4441a-0900-45fc-b59a-0e8939d339b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.880506 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fdf4441a-0900-45fc-b59a-0e8939d339b3" (UID: "fdf4441a-0900-45fc-b59a-0e8939d339b3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.902168 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fdf4441a-0900-45fc-b59a-0e8939d339b3" (UID: "fdf4441a-0900-45fc-b59a-0e8939d339b3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.916441 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.916487 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gssv\" (UniqueName: \"kubernetes.io/projected/fdf4441a-0900-45fc-b59a-0e8939d339b3-kube-api-access-6gssv\") on node \"crc\" DevicePath \"\""
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.916500 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-config\") on node \"crc\" DevicePath \"\""
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.916509 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 28 17:17:52 crc kubenswrapper[4710]: I1128 17:17:52.916520 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fdf4441a-0900-45fc-b59a-0e8939d339b3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.026320 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.026832 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf4441a-0900-45fc-b59a-0e8939d339b3" containerName="dnsmasq-dns"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.026848 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf4441a-0900-45fc-b59a-0e8939d339b3" containerName="dnsmasq-dns"
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.026862 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdf4441a-0900-45fc-b59a-0e8939d339b3" containerName="init"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.026872 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdf4441a-0900-45fc-b59a-0e8939d339b3" containerName="init"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.027113 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdf4441a-0900-45fc-b59a-0e8939d339b3" containerName="dnsmasq-dns"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.040811 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.043739 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.043891 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.043930 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.047240 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-fjv9t"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.050383 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.110952 4710 generic.go:334] "Generic (PLEG): container finished" podID="fdf4441a-0900-45fc-b59a-0e8939d339b3" containerID="4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d" exitCode=0
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.111162 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s66fr" event={"ID":"fdf4441a-0900-45fc-b59a-0e8939d339b3","Type":"ContainerDied","Data":"4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d"}
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.111686 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-s66fr" event={"ID":"fdf4441a-0900-45fc-b59a-0e8939d339b3","Type":"ContainerDied","Data":"e79e7daab2d4679f5c44c704a8556f217db3171723f8a4908b86cfc84e4b8313"}
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.111814 4710 scope.go:117] "RemoveContainer" containerID="4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.111250 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-s66fr"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.118226 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" event={"ID":"0242e508-bdc7-4611-92f2-6df38d51821c","Type":"ContainerStarted","Data":"c9f0c5c5d5028e7766a57c305a4a838c13d7fa9717336163109a605654cf74fb"}
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.122237 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.122344 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz2tc\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-kube-api-access-fz2tc\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.122421 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/96a67841-bed8-4758-a152-31602db98d49-lock\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.122555 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/96a67841-bed8-4758-a152-31602db98d49-cache\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.122594 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.160906 4710 scope.go:117] "RemoveContainer" containerID="d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.206653 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s66fr"]
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.212467 4710 scope.go:117] "RemoveContainer" containerID="4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d"
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.213118 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d\": container with ID starting with 4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d not found: ID does not exist" containerID="4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.213151 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d"} err="failed to get container status \"4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d\": rpc error: code = NotFound desc = could not find container \"4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d\": container with ID starting with 4fc405c670368402bd8d2f955cadc2eb0c13c164555024cb625154886c65716d not found: ID does not exist"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.213334 4710 scope.go:117] "RemoveContainer" containerID="d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f"
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.214573 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f\": container with ID starting with d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f not found: ID does not exist" containerID="d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.214619 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f"} err="failed to get container status \"d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f\": rpc error: code = NotFound desc = could not find container \"d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f\": container with ID starting with d8152c780284c74e980568f79ada03ec3cba91f79688bd603ffce3c9f32b2e4f not found: ID does not exist"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.214620 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-s66fr"]
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.225006 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/96a67841-bed8-4758-a152-31602db98d49-cache\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.225085 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.225202 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.225258 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz2tc\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-kube-api-access-fz2tc\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.225303 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/96a67841-bed8-4758-a152-31602db98d49-lock\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.225741 4710 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.225776 4710 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.225826 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift podName:96a67841-bed8-4758-a152-31602db98d49 nodeName:}" failed. No retries permitted until 2025-11-28 17:17:53.725807109 +0000 UTC m=+1162.984107154 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift") pod "swift-storage-0" (UID: "96a67841-bed8-4758-a152-31602db98d49") : configmap "swift-ring-files" not found
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.226122 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/96a67841-bed8-4758-a152-31602db98d49-lock\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.226445 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.225576 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/96a67841-bed8-4758-a152-31602db98d49-cache\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.275840 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz2tc\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-kube-api-access-fz2tc\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.298074 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.578163 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-vmqrw"]
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.579724 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.582375 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.584830 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.587797 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.589928 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-vmqrw"]
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.634314 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b25a841-414c-47de-95a9-4086d6d5eb9a-etc-swift\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.634382 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-combined-ca-bundle\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.634468 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-swiftconf\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.634509 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-ring-data-devices\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.634555 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-scripts\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.634578 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmzb4\" (UniqueName: \"kubernetes.io/projected/8b25a841-414c-47de-95a9-4086d6d5eb9a-kube-api-access-kmzb4\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.634632 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-dispersionconf\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.645892 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-kmxkk"]
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.647188 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.693188 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-vmqrw"]
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.693514 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-kmzb4 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-vmqrw" podUID="8b25a841-414c-47de-95a9-4086d6d5eb9a"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.700348 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kmxkk"]
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.736990 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b25a841-414c-47de-95a9-4086d6d5eb9a-etc-swift\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737053 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2b3dc001-22a3-4390-8d90-6769b184d2a0-etc-swift\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737078 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-combined-ca-bundle\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737121 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737366 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-scripts\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.737401 4710 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.737435 4710 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Nov 28 17:17:53 crc kubenswrapper[4710]: E1128 17:17:53.737491 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift podName:96a67841-bed8-4758-a152-31602db98d49 nodeName:}" failed. No retries permitted until 2025-11-28 17:17:54.737468834 +0000 UTC m=+1163.995768969 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift") pod "swift-storage-0" (UID: "96a67841-bed8-4758-a152-31602db98d49") : configmap "swift-ring-files" not found
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737523 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-dispersionconf\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737584 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-combined-ca-bundle\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737653 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-swiftconf\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737685 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b25a841-414c-47de-95a9-4086d6d5eb9a-etc-swift\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737749 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-ring-data-devices\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737860 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-scripts\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737891 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmzb4\" (UniqueName: \"kubernetes.io/projected/8b25a841-414c-47de-95a9-4086d6d5eb9a-kube-api-access-kmzb4\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.737934 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-ring-data-devices\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.738043 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-dispersionconf\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.738079 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5phdf\" (UniqueName: \"kubernetes.io/projected/2b3dc001-22a3-4390-8d90-6769b184d2a0-kube-api-access-5phdf\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.738129 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-swiftconf\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.738973 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-ring-data-devices\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.739136 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-scripts\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.741945 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-combined-ca-bundle\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.742285 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-dispersionconf\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.745392 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-swiftconf\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.754213 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmzb4\" (UniqueName: \"kubernetes.io/projected/8b25a841-414c-47de-95a9-4086d6d5eb9a-kube-api-access-kmzb4\") pod \"swift-ring-rebalance-vmqrw\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.841381 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2b3dc001-22a3-4390-8d90-6769b184d2a0-etc-swift\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.841776 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-scripts\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.841969 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-dispersionconf\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.842785 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-combined-ca-bundle\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.842072 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2b3dc001-22a3-4390-8d90-6769b184d2a0-etc-swift\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.843496 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-ring-data-devices\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.843708 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5phdf\" (UniqueName: \"kubernetes.io/projected/2b3dc001-22a3-4390-8d90-6769b184d2a0-kube-api-access-5phdf\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.843912 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-swiftconf\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.843753 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-scripts\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.844794 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-ring-data-devices\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.846620 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-combined-ca-bundle\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.849153 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-dispersionconf\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.849633 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-swiftconf\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.864494 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5phdf\" (UniqueName: \"kubernetes.io/projected/2b3dc001-22a3-4390-8d90-6769b184d2a0-kube-api-access-5phdf\") pod \"swift-ring-rebalance-kmxkk\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:53 crc kubenswrapper[4710]: I1128 17:17:53.967499 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kmxkk"
Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.141190 4710 generic.go:334] "Generic (PLEG): container finished" podID="0242e508-bdc7-4611-92f2-6df38d51821c" containerID="7bb3a1ae4ed009d9bd292647e1e1e68979c272976e21d90e9ed5d2a06b146c09" exitCode=0
Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.141952 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vmqrw"
Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.142632 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" event={"ID":"0242e508-bdc7-4611-92f2-6df38d51821c","Type":"ContainerDied","Data":"7bb3a1ae4ed009d9bd292647e1e1e68979c272976e21d90e9ed5d2a06b146c09"}
Need to start a new one" pod="openstack/swift-ring-rebalance-vmqrw" Nov 28 17:17:54 crc kubenswrapper[4710]: W1128 17:17:54.455322 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b3dc001_22a3_4390_8d90_6769b184d2a0.slice/crio-3c07f329a1e1a67cdec21c6a2d72391f28531f282edbb51df4817b9c972c7be8 WatchSource:0}: Error finding container 3c07f329a1e1a67cdec21c6a2d72391f28531f282edbb51df4817b9c972c7be8: Status 404 returned error can't find the container with id 3c07f329a1e1a67cdec21c6a2d72391f28531f282edbb51df4817b9c972c7be8 Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.460680 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-kmxkk"] Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.467011 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-swiftconf\") pod \"8b25a841-414c-47de-95a9-4086d6d5eb9a\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.467248 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-combined-ca-bundle\") pod \"8b25a841-414c-47de-95a9-4086d6d5eb9a\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.467301 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-dispersionconf\") pod \"8b25a841-414c-47de-95a9-4086d6d5eb9a\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.467322 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmzb4\" (UniqueName: \"kubernetes.io/projected/8b25a841-414c-47de-95a9-4086d6d5eb9a-kube-api-access-kmzb4\") pod \"8b25a841-414c-47de-95a9-4086d6d5eb9a\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.467355 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-ring-data-devices\") pod \"8b25a841-414c-47de-95a9-4086d6d5eb9a\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.467383 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b25a841-414c-47de-95a9-4086d6d5eb9a-etc-swift\") pod \"8b25a841-414c-47de-95a9-4086d6d5eb9a\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.467421 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-scripts\") pod \"8b25a841-414c-47de-95a9-4086d6d5eb9a\" (UID: \"8b25a841-414c-47de-95a9-4086d6d5eb9a\") " Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.467695 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b25a841-414c-47de-95a9-4086d6d5eb9a-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "8b25a841-414c-47de-95a9-4086d6d5eb9a" (UID: 
"8b25a841-414c-47de-95a9-4086d6d5eb9a"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.468022 4710 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/8b25a841-414c-47de-95a9-4086d6d5eb9a-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.468111 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-scripts" (OuterVolumeSpecName: "scripts") pod "8b25a841-414c-47de-95a9-4086d6d5eb9a" (UID: "8b25a841-414c-47de-95a9-4086d6d5eb9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.468148 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "8b25a841-414c-47de-95a9-4086d6d5eb9a" (UID: "8b25a841-414c-47de-95a9-4086d6d5eb9a"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.472105 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "8b25a841-414c-47de-95a9-4086d6d5eb9a" (UID: "8b25a841-414c-47de-95a9-4086d6d5eb9a"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.472199 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b25a841-414c-47de-95a9-4086d6d5eb9a-kube-api-access-kmzb4" (OuterVolumeSpecName: "kube-api-access-kmzb4") pod "8b25a841-414c-47de-95a9-4086d6d5eb9a" (UID: "8b25a841-414c-47de-95a9-4086d6d5eb9a"). InnerVolumeSpecName "kube-api-access-kmzb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.472185 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "8b25a841-414c-47de-95a9-4086d6d5eb9a" (UID: "8b25a841-414c-47de-95a9-4086d6d5eb9a"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.473249 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b25a841-414c-47de-95a9-4086d6d5eb9a" (UID: "8b25a841-414c-47de-95a9-4086d6d5eb9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.570030 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.570070 4710 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.570082 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmzb4\" (UniqueName: \"kubernetes.io/projected/8b25a841-414c-47de-95a9-4086d6d5eb9a-kube-api-access-kmzb4\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.570096 4710 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.570108 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b25a841-414c-47de-95a9-4086d6d5eb9a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.570120 4710 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/8b25a841-414c-47de-95a9-4086d6d5eb9a-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 17:17:54 crc kubenswrapper[4710]: I1128 17:17:54.773017 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0" Nov 28 17:17:54 crc kubenswrapper[4710]: E1128 17:17:54.773343 4710 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 17:17:54 crc kubenswrapper[4710]: E1128 17:17:54.773407 4710 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 17:17:54 crc kubenswrapper[4710]: E1128 17:17:54.773514 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift podName:96a67841-bed8-4758-a152-31602db98d49 nodeName:}" failed. No retries permitted until 2025-11-28 17:17:56.773477336 +0000 UTC m=+1166.031777391 (durationBeforeRetry 2s). 
Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.155518 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdf4441a-0900-45fc-b59a-0e8939d339b3" path="/var/lib/kubelet/pods/fdf4441a-0900-45fc-b59a-0e8939d339b3/volumes" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.163931 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kmxkk" event={"ID":"2b3dc001-22a3-4390-8d90-6769b184d2a0","Type":"ContainerStarted","Data":"3c07f329a1e1a67cdec21c6a2d72391f28531f282edbb51df4817b9c972c7be8"} Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.188585 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-vmqrw" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.189184 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" event={"ID":"0242e508-bdc7-4611-92f2-6df38d51821c","Type":"ContainerStarted","Data":"7d9ecdaf3372577fdecf4e222b5356fdf79070cdb0a3eae03e648bd79e503c11"} Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.189709 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.222286 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jcjn5"] Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.223720 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jcjn5" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.224288 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" podStartSLOduration=4.2242674319999995 podStartE2EDuration="4.224267432s" podCreationTimestamp="2025-11-28 17:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:17:55.221214395 +0000 UTC m=+1164.479514450" watchObservedRunningTime="2025-11-28 17:17:55.224267432 +0000 UTC m=+1164.482567477" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.250219 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jcjn5"] Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.263796 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6571-account-create-update-rg7kx"] Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.265350 4710 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.272239 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.277144 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6571-account-create-update-rg7kx"] Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.283137 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-operator-scripts\") pod \"glance-db-create-jcjn5\" (UID: \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\") " pod="openstack/glance-db-create-jcjn5" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.283268 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ps8t\" (UniqueName: \"kubernetes.io/projected/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-kube-api-access-5ps8t\") pod \"glance-db-create-jcjn5\" (UID: \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\") " pod="openstack/glance-db-create-jcjn5" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.298684 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-vmqrw"] Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.310138 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-vmqrw"] Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.385207 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ps8t\" (UniqueName: \"kubernetes.io/projected/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-kube-api-access-5ps8t\") pod \"glance-db-create-jcjn5\" (UID: \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\") " pod="openstack/glance-db-create-jcjn5" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.385343 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece12b78-9c4f-44aa-bb24-2737fca7003c-operator-scripts\") pod \"glance-6571-account-create-update-rg7kx\" (UID: \"ece12b78-9c4f-44aa-bb24-2737fca7003c\") " pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.385378 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plmbf\" (UniqueName: \"kubernetes.io/projected/ece12b78-9c4f-44aa-bb24-2737fca7003c-kube-api-access-plmbf\") pod \"glance-6571-account-create-update-rg7kx\" (UID: \"ece12b78-9c4f-44aa-bb24-2737fca7003c\") " pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.385437 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-operator-scripts\") pod \"glance-db-create-jcjn5\" (UID: \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\") " pod="openstack/glance-db-create-jcjn5" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.386264 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-operator-scripts\") pod \"glance-db-create-jcjn5\" (UID: \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\") " 
pod="openstack/glance-db-create-jcjn5" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.404240 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ps8t\" (UniqueName: \"kubernetes.io/projected/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-kube-api-access-5ps8t\") pod \"glance-db-create-jcjn5\" (UID: \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\") " pod="openstack/glance-db-create-jcjn5" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.488363 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece12b78-9c4f-44aa-bb24-2737fca7003c-operator-scripts\") pod \"glance-6571-account-create-update-rg7kx\" (UID: \"ece12b78-9c4f-44aa-bb24-2737fca7003c\") " pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.488444 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plmbf\" (UniqueName: \"kubernetes.io/projected/ece12b78-9c4f-44aa-bb24-2737fca7003c-kube-api-access-plmbf\") pod \"glance-6571-account-create-update-rg7kx\" (UID: \"ece12b78-9c4f-44aa-bb24-2737fca7003c\") " pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.489467 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece12b78-9c4f-44aa-bb24-2737fca7003c-operator-scripts\") pod \"glance-6571-account-create-update-rg7kx\" (UID: \"ece12b78-9c4f-44aa-bb24-2737fca7003c\") " pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.502856 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plmbf\" (UniqueName: \"kubernetes.io/projected/ece12b78-9c4f-44aa-bb24-2737fca7003c-kube-api-access-plmbf\") pod \"glance-6571-account-create-update-rg7kx\" (UID: \"ece12b78-9c4f-44aa-bb24-2737fca7003c\") " pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.558902 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jcjn5" Nov 28 17:17:55 crc kubenswrapper[4710]: I1128 17:17:55.590003 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:17:56 crc kubenswrapper[4710]: I1128 17:17:56.071995 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jcjn5"] Nov 28 17:17:56 crc kubenswrapper[4710]: I1128 17:17:56.229129 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6571-account-create-update-rg7kx"] Nov 28 17:17:56 crc kubenswrapper[4710]: I1128 17:17:56.821893 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0" Nov 28 17:17:56 crc kubenswrapper[4710]: E1128 17:17:56.822079 4710 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 17:17:56 crc kubenswrapper[4710]: E1128 17:17:56.822098 4710 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 17:17:56 crc kubenswrapper[4710]: E1128 17:17:56.822150 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift podName:96a67841-bed8-4758-a152-31602db98d49 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:00.822132239 +0000 UTC m=+1170.080432284 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift") pod "swift-storage-0" (UID: "96a67841-bed8-4758-a152-31602db98d49") : configmap "swift-ring-files" not found Nov 28 17:17:56 crc kubenswrapper[4710]: W1128 17:17:56.823616 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podece12b78_9c4f_44aa_bb24_2737fca7003c.slice/crio-d98930991e7060d1e74982e9ce25ab09375da91b3219702e18e6dcf022456b11 WatchSource:0}: Error finding container d98930991e7060d1e74982e9ce25ab09375da91b3219702e18e6dcf022456b11: Status 404 returned error can't find the container with id d98930991e7060d1e74982e9ce25ab09375da91b3219702e18e6dcf022456b11 Nov 28 17:17:56 crc kubenswrapper[4710]: W1128 17:17:56.823851 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafdf0bd2_a972_4148_9e0f_49f5d1f90f1c.slice/crio-5e4d9116f071b597080500f64d5a25a4abf1661af0ba0728940c931279460d7d WatchSource:0}: Error finding container 5e4d9116f071b597080500f64d5a25a4abf1661af0ba0728940c931279460d7d: Status 404 returned error can't find the container with id 5e4d9116f071b597080500f64d5a25a4abf1661af0ba0728940c931279460d7d Nov 28 17:17:57 crc kubenswrapper[4710]: I1128 17:17:57.156611 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b25a841-414c-47de-95a9-4086d6d5eb9a" path="/var/lib/kubelet/pods/8b25a841-414c-47de-95a9-4086d6d5eb9a/volumes" Nov 28 17:17:57 crc kubenswrapper[4710]: I1128 17:17:57.205689 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jcjn5" event={"ID":"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c","Type":"ContainerStarted","Data":"5e4d9116f071b597080500f64d5a25a4abf1661af0ba0728940c931279460d7d"} Nov 28 17:17:57 crc kubenswrapper[4710]: I1128 17:17:57.206984 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-6571-account-create-update-rg7kx" event={"ID":"ece12b78-9c4f-44aa-bb24-2737fca7003c","Type":"ContainerStarted","Data":"d98930991e7060d1e74982e9ce25ab09375da91b3219702e18e6dcf022456b11"} Nov 28 17:17:58 crc kubenswrapper[4710]: I1128 17:17:58.219203 4710 generic.go:334] "Generic (PLEG): container finished" podID="f399c745-4f4e-44e8-8813-af3861dc0eb0" containerID="a547221951088401addaed6821940f14517efca1a5c55afed29e17422d05f3b6" exitCode=0 Nov 28 17:17:58 crc kubenswrapper[4710]: I1128 17:17:58.219282 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f399c745-4f4e-44e8-8813-af3861dc0eb0","Type":"ContainerDied","Data":"a547221951088401addaed6821940f14517efca1a5c55afed29e17422d05f3b6"} Nov 28 17:17:58 crc kubenswrapper[4710]: I1128 17:17:58.222138 4710 generic.go:334] "Generic (PLEG): container finished" podID="01f3773a-064e-4241-8327-758541098113" containerID="2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f" exitCode=0 Nov 28 17:17:58 crc kubenswrapper[4710]: I1128 17:17:58.222158 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01f3773a-064e-4241-8327-758541098113","Type":"ContainerDied","Data":"2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f"} Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.232645 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kmxkk" event={"ID":"2b3dc001-22a3-4390-8d90-6769b184d2a0","Type":"ContainerStarted","Data":"35395aad55a106f45d61d6db91a3498f5ea231aa18d6c05975f376b7e5b9cf07"} Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.236421 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f399c745-4f4e-44e8-8813-af3861dc0eb0","Type":"ContainerStarted","Data":"30e9673a2bbd342f419e56170fc3b2ad0e2baead63a1f7877b1373479afe4653"} Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.237252 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.239805 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01f3773a-064e-4241-8327-758541098113","Type":"ContainerStarted","Data":"f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0"} Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.240116 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.241855 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jcjn5" event={"ID":"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c","Type":"ContainerStarted","Data":"bf0dd181cc047d9e06a7b13646c1c9f33ed4b7598cf819c0c23fb318707d6d08"} Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.243781 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6571-account-create-update-rg7kx" event={"ID":"ece12b78-9c4f-44aa-bb24-2737fca7003c","Type":"ContainerStarted","Data":"388cda1b5dde397b07d801e697d061f62f7971e0a9bef69ee6d89a677bc12347"} Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.254353 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-kmxkk" podStartSLOduration=1.8535991649999999 podStartE2EDuration="6.254329147s" podCreationTimestamp="2025-11-28 17:17:53 +0000 UTC" 
firstStartedPulling="2025-11-28 17:17:54.458320688 +0000 UTC m=+1163.716620733" lastFinishedPulling="2025-11-28 17:17:58.85905066 +0000 UTC m=+1168.117350715" observedRunningTime="2025-11-28 17:17:59.252983455 +0000 UTC m=+1168.511283500" watchObservedRunningTime="2025-11-28 17:17:59.254329147 +0000 UTC m=+1168.512629192" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.285773 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.719461736 podStartE2EDuration="54.285741893s" podCreationTimestamp="2025-11-28 17:17:05 +0000 UTC" firstStartedPulling="2025-11-28 17:17:07.015166464 +0000 UTC m=+1116.273466509" lastFinishedPulling="2025-11-28 17:17:24.581446621 +0000 UTC m=+1133.839746666" observedRunningTime="2025-11-28 17:17:59.278211484 +0000 UTC m=+1168.536511529" watchObservedRunningTime="2025-11-28 17:17:59.285741893 +0000 UTC m=+1168.544041938" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.309576 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.177250631 podStartE2EDuration="54.309558947s" podCreationTimestamp="2025-11-28 17:17:05 +0000 UTC" firstStartedPulling="2025-11-28 17:17:07.500994919 +0000 UTC m=+1116.759294964" lastFinishedPulling="2025-11-28 17:17:24.633303235 +0000 UTC m=+1133.891603280" observedRunningTime="2025-11-28 17:17:59.309244217 +0000 UTC m=+1168.567544262" watchObservedRunningTime="2025-11-28 17:17:59.309558947 +0000 UTC m=+1168.567858992" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.326394 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-6571-account-create-update-rg7kx" podStartSLOduration=4.32637657 podStartE2EDuration="4.32637657s" podCreationTimestamp="2025-11-28 17:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:17:59.320115832 +0000 UTC m=+1168.578415897" watchObservedRunningTime="2025-11-28 17:17:59.32637657 +0000 UTC m=+1168.584676615" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.343408 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-jcjn5" podStartSLOduration=4.343393049 podStartE2EDuration="4.343393049s" podCreationTimestamp="2025-11-28 17:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:17:59.33742391 +0000 UTC m=+1168.595723955" watchObservedRunningTime="2025-11-28 17:17:59.343393049 +0000 UTC m=+1168.601693094" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.513477 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-rx2v2"] Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.515021 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rx2v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.532159 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rx2v2"] Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.578947 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w952d\" (UniqueName: \"kubernetes.io/projected/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-kube-api-access-w952d\") pod \"keystone-db-create-rx2v2\" (UID: \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\") " pod="openstack/keystone-db-create-rx2v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.579133 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-operator-scripts\") pod \"keystone-db-create-rx2v2\" (UID: \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\") " pod="openstack/keystone-db-create-rx2v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.635417 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6b0c-account-create-update-m2272"] Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.637030 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.640095 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.646629 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b0c-account-create-update-m2272"] Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.680529 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-operator-scripts\") pod \"keystone-6b0c-account-create-update-m2272\" (UID: \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\") " pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.680645 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w952d\" (UniqueName: \"kubernetes.io/projected/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-kube-api-access-w952d\") pod \"keystone-db-create-rx2v2\" (UID: \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\") " pod="openstack/keystone-db-create-rx2v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.680711 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-operator-scripts\") pod \"keystone-db-create-rx2v2\" (UID: \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\") " pod="openstack/keystone-db-create-rx2v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.680734 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt9n8\" (UniqueName: \"kubernetes.io/projected/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-kube-api-access-wt9n8\") pod \"keystone-6b0c-account-create-update-m2272\" (UID: \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\") " pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.681847 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-operator-scripts\") pod \"keystone-db-create-rx2v2\" (UID: \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\") " pod="openstack/keystone-db-create-rx2v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.700089 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w952d\" (UniqueName: \"kubernetes.io/projected/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-kube-api-access-w952d\") pod \"keystone-db-create-rx2v2\" (UID: \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\") " pod="openstack/keystone-db-create-rx2v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.782572 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-operator-scripts\") pod \"keystone-6b0c-account-create-update-m2272\" (UID: \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\") " pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.783088 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt9n8\" (UniqueName: \"kubernetes.io/projected/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-kube-api-access-wt9n8\") pod \"keystone-6b0c-account-create-update-m2272\" (UID: \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\") " pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.783894 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-operator-scripts\") pod \"keystone-6b0c-account-create-update-m2272\" (UID: \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\") " pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.810457 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt9n8\" (UniqueName: \"kubernetes.io/projected/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-kube-api-access-wt9n8\") pod \"keystone-6b0c-account-create-update-m2272\" (UID: \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\") " pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.845624 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-sm9v2"] Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.847388 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sm9v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.848547 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-rx2v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.865211 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sm9v2"] Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.933661 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-c6a1-account-create-update-6jclb"] Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.934949 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.940475 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.955036 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.956802 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c6a1-account-create-update-6jclb"] Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.987405 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e97a646-985c-4a67-8cb6-c817e73c30e2-operator-scripts\") pod \"placement-db-create-sm9v2\" (UID: \"9e97a646-985c-4a67-8cb6-c817e73c30e2\") " pod="openstack/placement-db-create-sm9v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.987451 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43f5116-81fd-41d7-8509-1ff325cce28a-operator-scripts\") pod \"placement-c6a1-account-create-update-6jclb\" (UID: \"f43f5116-81fd-41d7-8509-1ff325cce28a\") " pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.987531 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd6v4\" (UniqueName: \"kubernetes.io/projected/9e97a646-985c-4a67-8cb6-c817e73c30e2-kube-api-access-wd6v4\") pod \"placement-db-create-sm9v2\" (UID: \"9e97a646-985c-4a67-8cb6-c817e73c30e2\") " pod="openstack/placement-db-create-sm9v2" Nov 28 17:17:59 crc kubenswrapper[4710]: I1128 17:17:59.987576 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ts2s\" (UniqueName: \"kubernetes.io/projected/f43f5116-81fd-41d7-8509-1ff325cce28a-kube-api-access-4ts2s\") pod \"placement-c6a1-account-create-update-6jclb\" (UID: \"f43f5116-81fd-41d7-8509-1ff325cce28a\") " pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.089982 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e97a646-985c-4a67-8cb6-c817e73c30e2-operator-scripts\") pod \"placement-db-create-sm9v2\" (UID: \"9e97a646-985c-4a67-8cb6-c817e73c30e2\") " pod="openstack/placement-db-create-sm9v2" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.090536 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43f5116-81fd-41d7-8509-1ff325cce28a-operator-scripts\") pod \"placement-c6a1-account-create-update-6jclb\" (UID: \"f43f5116-81fd-41d7-8509-1ff325cce28a\") " pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.090618 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd6v4\" (UniqueName: \"kubernetes.io/projected/9e97a646-985c-4a67-8cb6-c817e73c30e2-kube-api-access-wd6v4\") pod \"placement-db-create-sm9v2\" (UID: \"9e97a646-985c-4a67-8cb6-c817e73c30e2\") " pod="openstack/placement-db-create-sm9v2" Nov 28 
17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.090674 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ts2s\" (UniqueName: \"kubernetes.io/projected/f43f5116-81fd-41d7-8509-1ff325cce28a-kube-api-access-4ts2s\") pod \"placement-c6a1-account-create-update-6jclb\" (UID: \"f43f5116-81fd-41d7-8509-1ff325cce28a\") " pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.091563 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43f5116-81fd-41d7-8509-1ff325cce28a-operator-scripts\") pod \"placement-c6a1-account-create-update-6jclb\" (UID: \"f43f5116-81fd-41d7-8509-1ff325cce28a\") " pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.092126 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e97a646-985c-4a67-8cb6-c817e73c30e2-operator-scripts\") pod \"placement-db-create-sm9v2\" (UID: \"9e97a646-985c-4a67-8cb6-c817e73c30e2\") " pod="openstack/placement-db-create-sm9v2" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.111557 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd6v4\" (UniqueName: \"kubernetes.io/projected/9e97a646-985c-4a67-8cb6-c817e73c30e2-kube-api-access-wd6v4\") pod \"placement-db-create-sm9v2\" (UID: \"9e97a646-985c-4a67-8cb6-c817e73c30e2\") " pod="openstack/placement-db-create-sm9v2" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.115393 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ts2s\" (UniqueName: \"kubernetes.io/projected/f43f5116-81fd-41d7-8509-1ff325cce28a-kube-api-access-4ts2s\") pod \"placement-c6a1-account-create-update-6jclb\" (UID: \"f43f5116-81fd-41d7-8509-1ff325cce28a\") " pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.166614 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sm9v2" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.277383 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.287690 4710 generic.go:334] "Generic (PLEG): container finished" podID="afdf0bd2-a972-4148-9e0f-49f5d1f90f1c" containerID="bf0dd181cc047d9e06a7b13646c1c9f33ed4b7598cf819c0c23fb318707d6d08" exitCode=0 Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.288225 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jcjn5" event={"ID":"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c","Type":"ContainerDied","Data":"bf0dd181cc047d9e06a7b13646c1c9f33ed4b7598cf819c0c23fb318707d6d08"} Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.305176 4710 generic.go:334] "Generic (PLEG): container finished" podID="ece12b78-9c4f-44aa-bb24-2737fca7003c" containerID="388cda1b5dde397b07d801e697d061f62f7971e0a9bef69ee6d89a677bc12347" exitCode=0 Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.305244 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6571-account-create-update-rg7kx" event={"ID":"ece12b78-9c4f-44aa-bb24-2737fca7003c","Type":"ContainerDied","Data":"388cda1b5dde397b07d801e697d061f62f7971e0a9bef69ee6d89a677bc12347"} Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.480801 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-rx2v2"] Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.621645 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6b0c-account-create-update-m2272"] Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.769611 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-sm9v2"] Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.891475 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-c6a1-account-create-update-6jclb"] Nov 28 17:18:00 crc kubenswrapper[4710]: I1128 17:18:00.920821 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0" Nov 28 17:18:00 crc kubenswrapper[4710]: E1128 17:18:00.921127 4710 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 28 17:18:00 crc kubenswrapper[4710]: E1128 17:18:00.921142 4710 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 28 17:18:00 crc kubenswrapper[4710]: E1128 17:18:00.921188 4710 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift podName:96a67841-bed8-4758-a152-31602db98d49 nodeName:}" failed. No retries permitted until 2025-11-28 17:18:08.921173051 +0000 UTC m=+1178.179473096 (durationBeforeRetry 8s). 
Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.319003 4710 generic.go:334] "Generic (PLEG): container finished" podID="8ad53638-4b25-4cd6-bbd3-dcb7e577467e" containerID="d91b710966156edb3b1ee13fae8606e3ed707217b28cf5729f4f5e6259f2a5e0" exitCode=0 Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.319046 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rx2v2" event={"ID":"8ad53638-4b25-4cd6-bbd3-dcb7e577467e","Type":"ContainerDied","Data":"d91b710966156edb3b1ee13fae8606e3ed707217b28cf5729f4f5e6259f2a5e0"} Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.319108 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rx2v2" event={"ID":"8ad53638-4b25-4cd6-bbd3-dcb7e577467e","Type":"ContainerStarted","Data":"a721a6dd0bb160d1078fe5bd812f902afe2e4604040f0701e524ca6101e139b2"} Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.321024 4710 generic.go:334] "Generic (PLEG): container finished" podID="9e97a646-985c-4a67-8cb6-c817e73c30e2" containerID="9d0d476635d4b4b703a7830df150d8bdfd482008b14617d2c62e44618860b199" exitCode=0 Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.321106 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sm9v2" event={"ID":"9e97a646-985c-4a67-8cb6-c817e73c30e2","Type":"ContainerDied","Data":"9d0d476635d4b4b703a7830df150d8bdfd482008b14617d2c62e44618860b199"} Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.321137 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sm9v2" event={"ID":"9e97a646-985c-4a67-8cb6-c817e73c30e2","Type":"ContainerStarted","Data":"19b1e92461670816ad77b3f928f366b63cbb584d98b8c1ca28e1f74414beb929"} Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.323210 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c6a1-account-create-update-6jclb" event={"ID":"f43f5116-81fd-41d7-8509-1ff325cce28a","Type":"ContainerStarted","Data":"86ffcc08560ffa52ab084f23dd09bbff2bb05a822b4ddda4323f5971c78d2911"} Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.323247 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c6a1-account-create-update-6jclb" event={"ID":"f43f5116-81fd-41d7-8509-1ff325cce28a","Type":"ContainerStarted","Data":"71fe88d44627564ebd5067850207282c417e14cb609cb308408306616c319291"} Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.325566 4710 generic.go:334] "Generic (PLEG): container finished" podID="1053ff5c-8aab-40a1-8a79-6f85ab9a2be5" containerID="294b0a45e9b55f2afe78da566f64370a5e943eb127ad0ae9a3e7939ee24d4927" exitCode=0 Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.325680 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b0c-account-create-update-m2272" event={"ID":"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5","Type":"ContainerDied","Data":"294b0a45e9b55f2afe78da566f64370a5e943eb127ad0ae9a3e7939ee24d4927"} Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.325785 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b0c-account-create-update-m2272"
event={"ID":"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5","Type":"ContainerStarted","Data":"06abc172d3b4238bf83c5ce0cbcf06396a4866d7b72027324132e174e49a2e88"} Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.386640 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-c6a1-account-create-update-6jclb" podStartSLOduration=2.3866164899999998 podStartE2EDuration="2.38661649s" podCreationTimestamp="2025-11-28 17:17:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:18:01.378313807 +0000 UTC m=+1170.636613852" watchObservedRunningTime="2025-11-28 17:18:01.38661649 +0000 UTC m=+1170.644916535" Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.822372 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jcjn5" Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.949478 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ps8t\" (UniqueName: \"kubernetes.io/projected/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-kube-api-access-5ps8t\") pod \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\" (UID: \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\") " Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.949700 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-operator-scripts\") pod \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\" (UID: \"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c\") " Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.951078 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afdf0bd2-a972-4148-9e0f-49f5d1f90f1c" (UID: "afdf0bd2-a972-4148-9e0f-49f5d1f90f1c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.980990 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-kube-api-access-5ps8t" (OuterVolumeSpecName: "kube-api-access-5ps8t") pod "afdf0bd2-a972-4148-9e0f-49f5d1f90f1c" (UID: "afdf0bd2-a972-4148-9e0f-49f5d1f90f1c"). InnerVolumeSpecName "kube-api-access-5ps8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:01 crc kubenswrapper[4710]: I1128 17:18:01.997099 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.054499 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ps8t\" (UniqueName: \"kubernetes.io/projected/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-kube-api-access-5ps8t\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.054539 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.159435 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plmbf\" (UniqueName: \"kubernetes.io/projected/ece12b78-9c4f-44aa-bb24-2737fca7003c-kube-api-access-plmbf\") pod \"ece12b78-9c4f-44aa-bb24-2737fca7003c\" (UID: \"ece12b78-9c4f-44aa-bb24-2737fca7003c\") " Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.159514 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece12b78-9c4f-44aa-bb24-2737fca7003c-operator-scripts\") pod \"ece12b78-9c4f-44aa-bb24-2737fca7003c\" (UID: \"ece12b78-9c4f-44aa-bb24-2737fca7003c\") " Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.160500 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ece12b78-9c4f-44aa-bb24-2737fca7003c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ece12b78-9c4f-44aa-bb24-2737fca7003c" (UID: "ece12b78-9c4f-44aa-bb24-2737fca7003c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.196029 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ece12b78-9c4f-44aa-bb24-2737fca7003c-kube-api-access-plmbf" (OuterVolumeSpecName: "kube-api-access-plmbf") pod "ece12b78-9c4f-44aa-bb24-2737fca7003c" (UID: "ece12b78-9c4f-44aa-bb24-2737fca7003c"). InnerVolumeSpecName "kube-api-access-plmbf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.235935 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.263583 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plmbf\" (UniqueName: \"kubernetes.io/projected/ece12b78-9c4f-44aa-bb24-2737fca7003c-kube-api-access-plmbf\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.263616 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ece12b78-9c4f-44aa-bb24-2737fca7003c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.298450 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvmwp"] Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.298899 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" podUID="735e6f86-ee65-44b8-b685-aa3cf331c533" containerName="dnsmasq-dns" containerID="cri-o://477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7" gracePeriod=10 Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.315000 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.357187 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6571-account-create-update-rg7kx" event={"ID":"ece12b78-9c4f-44aa-bb24-2737fca7003c","Type":"ContainerDied","Data":"d98930991e7060d1e74982e9ce25ab09375da91b3219702e18e6dcf022456b11"} Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.357450 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6571-account-create-update-rg7kx" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.357473 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d98930991e7060d1e74982e9ce25ab09375da91b3219702e18e6dcf022456b11" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.382460 4710 generic.go:334] "Generic (PLEG): container finished" podID="f43f5116-81fd-41d7-8509-1ff325cce28a" containerID="86ffcc08560ffa52ab084f23dd09bbff2bb05a822b4ddda4323f5971c78d2911" exitCode=0 Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.382593 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c6a1-account-create-update-6jclb" event={"ID":"f43f5116-81fd-41d7-8509-1ff325cce28a","Type":"ContainerDied","Data":"86ffcc08560ffa52ab084f23dd09bbff2bb05a822b4ddda4323f5971c78d2911"} Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.391151 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jcjn5" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.391251 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jcjn5" event={"ID":"afdf0bd2-a972-4148-9e0f-49f5d1f90f1c","Type":"ContainerDied","Data":"5e4d9116f071b597080500f64d5a25a4abf1661af0ba0728940c931279460d7d"} Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.391297 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e4d9116f071b597080500f64d5a25a4abf1661af0ba0728940c931279460d7d" Nov 28 17:18:02 crc kubenswrapper[4710]: I1128 17:18:02.927503 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.008407 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-operator-scripts\") pod \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\" (UID: \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.008692 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt9n8\" (UniqueName: \"kubernetes.io/projected/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-kube-api-access-wt9n8\") pod \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\" (UID: \"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.009327 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1053ff5c-8aab-40a1-8a79-6f85ab9a2be5" (UID: "1053ff5c-8aab-40a1-8a79-6f85ab9a2be5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.030033 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-kube-api-access-wt9n8" (OuterVolumeSpecName: "kube-api-access-wt9n8") pod "1053ff5c-8aab-40a1-8a79-6f85ab9a2be5" (UID: "1053ff5c-8aab-40a1-8a79-6f85ab9a2be5"). InnerVolumeSpecName "kube-api-access-wt9n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.113431 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.113628 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt9n8\" (UniqueName: \"kubernetes.io/projected/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5-kube-api-access-wt9n8\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.151723 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.167067 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sm9v2" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.171535 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rx2v2" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.316497 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-config\") pod \"735e6f86-ee65-44b8-b685-aa3cf331c533\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.317326 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5qvw\" (UniqueName: \"kubernetes.io/projected/735e6f86-ee65-44b8-b685-aa3cf331c533-kube-api-access-s5qvw\") pod \"735e6f86-ee65-44b8-b685-aa3cf331c533\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.317498 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w952d\" (UniqueName: \"kubernetes.io/projected/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-kube-api-access-w952d\") pod \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\" (UID: \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.317733 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-dns-svc\") pod \"735e6f86-ee65-44b8-b685-aa3cf331c533\" (UID: \"735e6f86-ee65-44b8-b685-aa3cf331c533\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.317890 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wd6v4\" (UniqueName: \"kubernetes.io/projected/9e97a646-985c-4a67-8cb6-c817e73c30e2-kube-api-access-wd6v4\") pod \"9e97a646-985c-4a67-8cb6-c817e73c30e2\" (UID: \"9e97a646-985c-4a67-8cb6-c817e73c30e2\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.318166 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e97a646-985c-4a67-8cb6-c817e73c30e2-operator-scripts\") pod \"9e97a646-985c-4a67-8cb6-c817e73c30e2\" (UID: \"9e97a646-985c-4a67-8cb6-c817e73c30e2\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.318302 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-operator-scripts\") pod \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\" (UID: \"8ad53638-4b25-4cd6-bbd3-dcb7e577467e\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.319066 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8ad53638-4b25-4cd6-bbd3-dcb7e577467e" (UID: "8ad53638-4b25-4cd6-bbd3-dcb7e577467e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.319064 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e97a646-985c-4a67-8cb6-c817e73c30e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e97a646-985c-4a67-8cb6-c817e73c30e2" (UID: "9e97a646-985c-4a67-8cb6-c817e73c30e2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.319954 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e97a646-985c-4a67-8cb6-c817e73c30e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.320298 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.321252 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735e6f86-ee65-44b8-b685-aa3cf331c533-kube-api-access-s5qvw" (OuterVolumeSpecName: "kube-api-access-s5qvw") pod "735e6f86-ee65-44b8-b685-aa3cf331c533" (UID: "735e6f86-ee65-44b8-b685-aa3cf331c533"). InnerVolumeSpecName "kube-api-access-s5qvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.325113 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-kube-api-access-w952d" (OuterVolumeSpecName: "kube-api-access-w952d") pod "8ad53638-4b25-4cd6-bbd3-dcb7e577467e" (UID: "8ad53638-4b25-4cd6-bbd3-dcb7e577467e"). InnerVolumeSpecName "kube-api-access-w952d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.326364 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e97a646-985c-4a67-8cb6-c817e73c30e2-kube-api-access-wd6v4" (OuterVolumeSpecName: "kube-api-access-wd6v4") pod "9e97a646-985c-4a67-8cb6-c817e73c30e2" (UID: "9e97a646-985c-4a67-8cb6-c817e73c30e2"). InnerVolumeSpecName "kube-api-access-wd6v4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.379502 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-config" (OuterVolumeSpecName: "config") pod "735e6f86-ee65-44b8-b685-aa3cf331c533" (UID: "735e6f86-ee65-44b8-b685-aa3cf331c533"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.387494 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "735e6f86-ee65-44b8-b685-aa3cf331c533" (UID: "735e6f86-ee65-44b8-b685-aa3cf331c533"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.403665 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-rx2v2" event={"ID":"8ad53638-4b25-4cd6-bbd3-dcb7e577467e","Type":"ContainerDied","Data":"a721a6dd0bb160d1078fe5bd812f902afe2e4604040f0701e524ca6101e139b2"} Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.403717 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a721a6dd0bb160d1078fe5bd812f902afe2e4604040f0701e524ca6101e139b2" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.403803 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-rx2v2" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.406811 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-sm9v2" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.406805 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-sm9v2" event={"ID":"9e97a646-985c-4a67-8cb6-c817e73c30e2","Type":"ContainerDied","Data":"19b1e92461670816ad77b3f928f366b63cbb584d98b8c1ca28e1f74414beb929"} Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.407000 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b1e92461670816ad77b3f928f366b63cbb584d98b8c1ca28e1f74414beb929" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.408880 4710 generic.go:334] "Generic (PLEG): container finished" podID="735e6f86-ee65-44b8-b685-aa3cf331c533" containerID="477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7" exitCode=0 Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.408964 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" event={"ID":"735e6f86-ee65-44b8-b685-aa3cf331c533","Type":"ContainerDied","Data":"477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7"} Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.408994 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" event={"ID":"735e6f86-ee65-44b8-b685-aa3cf331c533","Type":"ContainerDied","Data":"4c2ca6eae6b067dc1c7e531f39e84e798e61740e562d62df42eebab0a0f777ac"} Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.408986 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jvmwp" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.409013 4710 scope.go:117] "RemoveContainer" containerID="477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.412357 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6b0c-account-create-update-m2272" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.413366 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6b0c-account-create-update-m2272" event={"ID":"1053ff5c-8aab-40a1-8a79-6f85ab9a2be5","Type":"ContainerDied","Data":"06abc172d3b4238bf83c5ce0cbcf06396a4866d7b72027324132e174e49a2e88"} Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.413408 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06abc172d3b4238bf83c5ce0cbcf06396a4866d7b72027324132e174e49a2e88" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.423749 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5qvw\" (UniqueName: \"kubernetes.io/projected/735e6f86-ee65-44b8-b685-aa3cf331c533-kube-api-access-s5qvw\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.423896 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w952d\" (UniqueName: \"kubernetes.io/projected/8ad53638-4b25-4cd6-bbd3-dcb7e577467e-kube-api-access-w952d\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.423909 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.423921 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wd6v4\" (UniqueName: \"kubernetes.io/projected/9e97a646-985c-4a67-8cb6-c817e73c30e2-kube-api-access-wd6v4\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.423932 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/735e6f86-ee65-44b8-b685-aa3cf331c533-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.447913 4710 scope.go:117] "RemoveContainer" containerID="1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.462799 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvmwp"] Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.471799 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jvmwp"] Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.486947 4710 scope.go:117] "RemoveContainer" containerID="477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7" Nov 28 17:18:03 crc kubenswrapper[4710]: E1128 17:18:03.487783 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7\": container with ID starting with 477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7 not found: ID does not exist" containerID="477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.487856 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7"} err="failed to get container status \"477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7\": rpc error: code = NotFound desc = could not find container 
\"477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7\": container with ID starting with 477b5bae66fb802ef8ab23ebb4a135280fe139f7b1fb7f2d1f31ecd1fd5fcbe7 not found: ID does not exist" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.487901 4710 scope.go:117] "RemoveContainer" containerID="1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f" Nov 28 17:18:03 crc kubenswrapper[4710]: E1128 17:18:03.488443 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f\": container with ID starting with 1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f not found: ID does not exist" containerID="1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.488481 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f"} err="failed to get container status \"1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f\": rpc error: code = NotFound desc = could not find container \"1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f\": container with ID starting with 1535d6b1690268da76a7cc95d46716e92118339971bec44796fcf99bc16aa76f not found: ID does not exist" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.721540 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.831323 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ts2s\" (UniqueName: \"kubernetes.io/projected/f43f5116-81fd-41d7-8509-1ff325cce28a-kube-api-access-4ts2s\") pod \"f43f5116-81fd-41d7-8509-1ff325cce28a\" (UID: \"f43f5116-81fd-41d7-8509-1ff325cce28a\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.831469 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43f5116-81fd-41d7-8509-1ff325cce28a-operator-scripts\") pod \"f43f5116-81fd-41d7-8509-1ff325cce28a\" (UID: \"f43f5116-81fd-41d7-8509-1ff325cce28a\") " Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.832833 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f43f5116-81fd-41d7-8509-1ff325cce28a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f43f5116-81fd-41d7-8509-1ff325cce28a" (UID: "f43f5116-81fd-41d7-8509-1ff325cce28a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.838119 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f43f5116-81fd-41d7-8509-1ff325cce28a-kube-api-access-4ts2s" (OuterVolumeSpecName: "kube-api-access-4ts2s") pod "f43f5116-81fd-41d7-8509-1ff325cce28a" (UID: "f43f5116-81fd-41d7-8509-1ff325cce28a"). InnerVolumeSpecName "kube-api-access-4ts2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.934050 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ts2s\" (UniqueName: \"kubernetes.io/projected/f43f5116-81fd-41d7-8509-1ff325cce28a-kube-api-access-4ts2s\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:03 crc kubenswrapper[4710]: I1128 17:18:03.934099 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f43f5116-81fd-41d7-8509-1ff325cce28a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:04 crc kubenswrapper[4710]: I1128 17:18:04.421781 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-c6a1-account-create-update-6jclb" event={"ID":"f43f5116-81fd-41d7-8509-1ff325cce28a","Type":"ContainerDied","Data":"71fe88d44627564ebd5067850207282c417e14cb609cb308408306616c319291"} Nov 28 17:18:04 crc kubenswrapper[4710]: I1128 17:18:04.421822 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71fe88d44627564ebd5067850207282c417e14cb609cb308408306616c319291" Nov 28 17:18:04 crc kubenswrapper[4710]: I1128 17:18:04.421878 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-c6a1-account-create-update-6jclb" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.156016 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735e6f86-ee65-44b8-b685-aa3cf331c533" path="/var/lib/kubelet/pods/735e6f86-ee65-44b8-b685-aa3cf331c533/volumes" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.555299 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xw8td"] Nov 28 17:18:05 crc kubenswrapper[4710]: E1128 17:18:05.556071 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ad53638-4b25-4cd6-bbd3-dcb7e577467e" containerName="mariadb-database-create" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556092 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ad53638-4b25-4cd6-bbd3-dcb7e577467e" containerName="mariadb-database-create" Nov 28 17:18:05 crc kubenswrapper[4710]: E1128 17:18:05.556114 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afdf0bd2-a972-4148-9e0f-49f5d1f90f1c" containerName="mariadb-database-create" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556122 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="afdf0bd2-a972-4148-9e0f-49f5d1f90f1c" containerName="mariadb-database-create" Nov 28 17:18:05 crc kubenswrapper[4710]: E1128 17:18:05.556141 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735e6f86-ee65-44b8-b685-aa3cf331c533" containerName="dnsmasq-dns" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556152 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="735e6f86-ee65-44b8-b685-aa3cf331c533" containerName="dnsmasq-dns" Nov 28 17:18:05 crc kubenswrapper[4710]: E1128 17:18:05.556167 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735e6f86-ee65-44b8-b685-aa3cf331c533" containerName="init" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556177 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="735e6f86-ee65-44b8-b685-aa3cf331c533" containerName="init" Nov 28 17:18:05 crc kubenswrapper[4710]: E1128 17:18:05.556192 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ece12b78-9c4f-44aa-bb24-2737fca7003c" 
containerName="mariadb-account-create-update" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556199 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ece12b78-9c4f-44aa-bb24-2737fca7003c" containerName="mariadb-account-create-update" Nov 28 17:18:05 crc kubenswrapper[4710]: E1128 17:18:05.556212 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1053ff5c-8aab-40a1-8a79-6f85ab9a2be5" containerName="mariadb-account-create-update" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556220 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1053ff5c-8aab-40a1-8a79-6f85ab9a2be5" containerName="mariadb-account-create-update" Nov 28 17:18:05 crc kubenswrapper[4710]: E1128 17:18:05.556249 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f43f5116-81fd-41d7-8509-1ff325cce28a" containerName="mariadb-account-create-update" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556256 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f43f5116-81fd-41d7-8509-1ff325cce28a" containerName="mariadb-account-create-update" Nov 28 17:18:05 crc kubenswrapper[4710]: E1128 17:18:05.556269 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e97a646-985c-4a67-8cb6-c817e73c30e2" containerName="mariadb-database-create" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556276 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e97a646-985c-4a67-8cb6-c817e73c30e2" containerName="mariadb-database-create" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556551 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="735e6f86-ee65-44b8-b685-aa3cf331c533" containerName="dnsmasq-dns" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556572 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ece12b78-9c4f-44aa-bb24-2737fca7003c" containerName="mariadb-account-create-update" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556585 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f43f5116-81fd-41d7-8509-1ff325cce28a" containerName="mariadb-account-create-update" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556601 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="afdf0bd2-a972-4148-9e0f-49f5d1f90f1c" containerName="mariadb-database-create" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556614 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e97a646-985c-4a67-8cb6-c817e73c30e2" containerName="mariadb-database-create" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556627 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ad53638-4b25-4cd6-bbd3-dcb7e577467e" containerName="mariadb-database-create" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.556639 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="1053ff5c-8aab-40a1-8a79-6f85ab9a2be5" containerName="mariadb-account-create-update" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.557818 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.560340 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xds75" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.560352 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.566747 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xw8td"] Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.686962 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-combined-ca-bundle\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.687045 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-db-sync-config-data\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.687110 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-config-data\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.687207 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59bts\" (UniqueName: \"kubernetes.io/projected/a3835d37-f072-4310-a667-a7f398e80ab1-kube-api-access-59bts\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.789114 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-combined-ca-bundle\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.789209 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-db-sync-config-data\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.789273 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-config-data\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.789301 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59bts\" (UniqueName: \"kubernetes.io/projected/a3835d37-f072-4310-a667-a7f398e80ab1-kube-api-access-59bts\") pod 
\"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.794510 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-db-sync-config-data\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.794569 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-combined-ca-bundle\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.795321 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-config-data\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.808729 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59bts\" (UniqueName: \"kubernetes.io/projected/a3835d37-f072-4310-a667-a7f398e80ab1-kube-api-access-59bts\") pod \"glance-db-sync-xw8td\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:05 crc kubenswrapper[4710]: I1128 17:18:05.912035 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xw8td" Nov 28 17:18:08 crc kubenswrapper[4710]: I1128 17:18:07.433868 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xw8td"] Nov 28 17:18:08 crc kubenswrapper[4710]: W1128 17:18:07.436594 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3835d37_f072_4310_a667_a7f398e80ab1.slice/crio-ee851e596912c0e0ce6c40356df667a32a06376b24c10438ef2ac18e415270b4 WatchSource:0}: Error finding container ee851e596912c0e0ce6c40356df667a32a06376b24c10438ef2ac18e415270b4: Status 404 returned error can't find the container with id ee851e596912c0e0ce6c40356df667a32a06376b24c10438ef2ac18e415270b4 Nov 28 17:18:08 crc kubenswrapper[4710]: I1128 17:18:07.471098 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xw8td" event={"ID":"a3835d37-f072-4310-a667-a7f398e80ab1","Type":"ContainerStarted","Data":"ee851e596912c0e0ce6c40356df667a32a06376b24c10438ef2ac18e415270b4"} Nov 28 17:18:08 crc kubenswrapper[4710]: I1128 17:18:08.486512 4710 generic.go:334] "Generic (PLEG): container finished" podID="2b3dc001-22a3-4390-8d90-6769b184d2a0" containerID="35395aad55a106f45d61d6db91a3498f5ea231aa18d6c05975f376b7e5b9cf07" exitCode=0 Nov 28 17:18:08 crc kubenswrapper[4710]: I1128 17:18:08.486939 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kmxkk" event={"ID":"2b3dc001-22a3-4390-8d90-6769b184d2a0","Type":"ContainerDied","Data":"35395aad55a106f45d61d6db91a3498f5ea231aa18d6c05975f376b7e5b9cf07"} Nov 28 17:18:08 crc kubenswrapper[4710]: I1128 17:18:08.944843 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0" Nov 28 17:18:08 crc kubenswrapper[4710]: I1128 17:18:08.953198 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/96a67841-bed8-4758-a152-31602db98d49-etc-swift\") pod \"swift-storage-0\" (UID: \"96a67841-bed8-4758-a152-31602db98d49\") " pod="openstack/swift-storage-0" Nov 28 17:18:08 crc kubenswrapper[4710]: I1128 17:18:08.962823 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.568592 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 28 17:18:09 crc kubenswrapper[4710]: W1128 17:18:09.578688 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96a67841_bed8_4758_a152_31602db98d49.slice/crio-c0842b60eda28c3963b5914dd3cfa04ac8d61bff1757b01afd71e12cb1d15213 WatchSource:0}: Error finding container c0842b60eda28c3963b5914dd3cfa04ac8d61bff1757b01afd71e12cb1d15213: Status 404 returned error can't find the container with id c0842b60eda28c3963b5914dd3cfa04ac8d61bff1757b01afd71e12cb1d15213 Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.865130 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-kmxkk" Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.961890 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5phdf\" (UniqueName: \"kubernetes.io/projected/2b3dc001-22a3-4390-8d90-6769b184d2a0-kube-api-access-5phdf\") pod \"2b3dc001-22a3-4390-8d90-6769b184d2a0\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.961947 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-swiftconf\") pod \"2b3dc001-22a3-4390-8d90-6769b184d2a0\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.962028 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-ring-data-devices\") pod \"2b3dc001-22a3-4390-8d90-6769b184d2a0\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.962272 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2b3dc001-22a3-4390-8d90-6769b184d2a0-etc-swift\") pod \"2b3dc001-22a3-4390-8d90-6769b184d2a0\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.962306 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-scripts\") pod \"2b3dc001-22a3-4390-8d90-6769b184d2a0\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.962343 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-combined-ca-bundle\") pod \"2b3dc001-22a3-4390-8d90-6769b184d2a0\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.962374 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-dispersionconf\") pod \"2b3dc001-22a3-4390-8d90-6769b184d2a0\" (UID: \"2b3dc001-22a3-4390-8d90-6769b184d2a0\") " Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.964468 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "2b3dc001-22a3-4390-8d90-6769b184d2a0" (UID: "2b3dc001-22a3-4390-8d90-6769b184d2a0"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.964745 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b3dc001-22a3-4390-8d90-6769b184d2a0-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "2b3dc001-22a3-4390-8d90-6769b184d2a0" (UID: "2b3dc001-22a3-4390-8d90-6769b184d2a0"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.975488 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b3dc001-22a3-4390-8d90-6769b184d2a0-kube-api-access-5phdf" (OuterVolumeSpecName: "kube-api-access-5phdf") pod "2b3dc001-22a3-4390-8d90-6769b184d2a0" (UID: "2b3dc001-22a3-4390-8d90-6769b184d2a0"). InnerVolumeSpecName "kube-api-access-5phdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.975701 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "2b3dc001-22a3-4390-8d90-6769b184d2a0" (UID: "2b3dc001-22a3-4390-8d90-6769b184d2a0"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.989105 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-scripts" (OuterVolumeSpecName: "scripts") pod "2b3dc001-22a3-4390-8d90-6769b184d2a0" (UID: "2b3dc001-22a3-4390-8d90-6769b184d2a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:09 crc kubenswrapper[4710]: I1128 17:18:09.998938 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "2b3dc001-22a3-4390-8d90-6769b184d2a0" (UID: "2b3dc001-22a3-4390-8d90-6769b184d2a0"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.012534 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b3dc001-22a3-4390-8d90-6769b184d2a0" (UID: "2b3dc001-22a3-4390-8d90-6769b184d2a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.064902 4710 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/2b3dc001-22a3-4390-8d90-6769b184d2a0-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.064940 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.064952 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.064965 4710 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.064976 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5phdf\" (UniqueName: \"kubernetes.io/projected/2b3dc001-22a3-4390-8d90-6769b184d2a0-kube-api-access-5phdf\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.064988 4710 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/2b3dc001-22a3-4390-8d90-6769b184d2a0-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.064996 4710 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/2b3dc001-22a3-4390-8d90-6769b184d2a0-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.511970 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-kmxkk" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.512010 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-kmxkk" event={"ID":"2b3dc001-22a3-4390-8d90-6769b184d2a0","Type":"ContainerDied","Data":"3c07f329a1e1a67cdec21c6a2d72391f28531f282edbb51df4817b9c972c7be8"} Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.512055 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c07f329a1e1a67cdec21c6a2d72391f28531f282edbb51df4817b9c972c7be8" Nov 28 17:18:10 crc kubenswrapper[4710]: I1128 17:18:10.515937 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"c0842b60eda28c3963b5914dd3cfa04ac8d61bff1757b01afd71e12cb1d15213"} Nov 28 17:18:11 crc kubenswrapper[4710]: I1128 17:18:11.469554 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4h2ch" podUID="c9a14e8a-2aba-4827-8ff4-48858bec6075" containerName="ovn-controller" probeResult="failure" output=< Nov 28 17:18:11 crc kubenswrapper[4710]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 17:18:11 crc kubenswrapper[4710]: > Nov 28 17:18:11 crc kubenswrapper[4710]: I1128 17:18:11.489495 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:18:12 crc kubenswrapper[4710]: I1128 17:18:12.538867 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"20deb3190fcb56b59c3bc0f90ab7e2881f31b29ff9ca043ff7aba5ec1692861f"} Nov 28 17:18:13 crc kubenswrapper[4710]: I1128 17:18:13.344416 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:18:13 crc kubenswrapper[4710]: I1128 17:18:13.344850 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:18:13 crc kubenswrapper[4710]: I1128 17:18:13.344903 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:18:13 crc kubenswrapper[4710]: I1128 17:18:13.345662 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb26b81e49ab86b80e712b9b1ccbaa329c394a8a23985c1f1e0d00b07d836649"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:18:13 crc kubenswrapper[4710]: I1128 17:18:13.345726 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" 
containerID="cri-o://fb26b81e49ab86b80e712b9b1ccbaa329c394a8a23985c1f1e0d00b07d836649" gracePeriod=600 Nov 28 17:18:13 crc kubenswrapper[4710]: I1128 17:18:13.551461 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="fb26b81e49ab86b80e712b9b1ccbaa329c394a8a23985c1f1e0d00b07d836649" exitCode=0 Nov 28 17:18:13 crc kubenswrapper[4710]: I1128 17:18:13.551532 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"fb26b81e49ab86b80e712b9b1ccbaa329c394a8a23985c1f1e0d00b07d836649"} Nov 28 17:18:13 crc kubenswrapper[4710]: I1128 17:18:13.552205 4710 scope.go:117] "RemoveContainer" containerID="739dbee0820156a6554c32a8264c90cabd429c04c249177fc7347cfeddb379ed" Nov 28 17:18:13 crc kubenswrapper[4710]: I1128 17:18:13.555954 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"21ecace1467a6acf33e65941af0782ce29c49a63234c381f8d58e66f58fe91ee"} Nov 28 17:18:14 crc kubenswrapper[4710]: I1128 17:18:14.576312 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"a453c2e71bf2cd04bfd21c83c846a02a4d9dc173d4778c32a637d56369950411"} Nov 28 17:18:15 crc kubenswrapper[4710]: I1128 17:18:15.601782 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"ffa0811004741a3a96efb79a03bf43c098d0bc0efb2e00e9bc05cf3f30f53499"} Nov 28 17:18:15 crc kubenswrapper[4710]: I1128 17:18:15.604194 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"21fd4e025722a9602a1e946aa30e2ca8c2a97b408a56cd641a1c9d99fc13a61e"} Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.458963 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.459977 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4h2ch" podUID="c9a14e8a-2aba-4827-8ff4-48858bec6075" containerName="ovn-controller" probeResult="failure" output=< Nov 28 17:18:16 crc kubenswrapper[4710]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 17:18:16 crc kubenswrapper[4710]: > Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.514430 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-t2rdj" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.775630 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4h2ch-config-fh68s"] Nov 28 17:18:16 crc kubenswrapper[4710]: E1128 17:18:16.776412 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b3dc001-22a3-4390-8d90-6769b184d2a0" containerName="swift-ring-rebalance" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.776434 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b3dc001-22a3-4390-8d90-6769b184d2a0" containerName="swift-ring-rebalance" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.776698 4710 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2b3dc001-22a3-4390-8d90-6769b184d2a0" containerName="swift-ring-rebalance" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.777459 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.817520 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-m6cp2"] Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.818989 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.828871 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fljhl\" (UniqueName: \"kubernetes.io/projected/a961a450-48ac-45fb-b979-5ff7a7407301-kube-api-access-fljhl\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.829004 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-scripts\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.829060 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run-ovn\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.829120 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-log-ovn\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.829317 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-additional-scripts\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.829416 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.835473 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.889933 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4h2ch-config-fh68s"] Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 
17:18:16.904809 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m6cp2"] Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.921382 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-rrxrk"] Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.922558 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.926022 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.932057 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fljhl\" (UniqueName: \"kubernetes.io/projected/a961a450-48ac-45fb-b979-5ff7a7407301-kube-api-access-fljhl\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.932108 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac18c14-9769-4d41-b867-de23b4a81a79-operator-scripts\") pod \"cinder-db-create-m6cp2\" (UID: \"fac18c14-9769-4d41-b867-de23b4a81a79\") " pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.932149 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-scripts\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.932187 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run-ovn\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.932220 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-log-ovn\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.932265 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d654p\" (UniqueName: \"kubernetes.io/projected/fac18c14-9769-4d41-b867-de23b4a81a79-kube-api-access-d654p\") pod \"cinder-db-create-m6cp2\" (UID: \"fac18c14-9769-4d41-b867-de23b4a81a79\") " pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.932311 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-additional-scripts\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.932339 4710 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.932655 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.933689 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-log-ovn\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.933741 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rrxrk"] Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.933815 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run-ovn\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.934322 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-additional-scripts\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.934841 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-scripts\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:16 crc kubenswrapper[4710]: I1128 17:18:16.984809 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fljhl\" (UniqueName: \"kubernetes.io/projected/a961a450-48ac-45fb-b979-5ff7a7407301-kube-api-access-fljhl\") pod \"ovn-controller-4h2ch-config-fh68s\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.013178 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ab78-account-create-update-pxdld"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.014699 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.031477 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.039851 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d654p\" (UniqueName: \"kubernetes.io/projected/fac18c14-9769-4d41-b867-de23b4a81a79-kube-api-access-d654p\") pod \"cinder-db-create-m6cp2\" (UID: \"fac18c14-9769-4d41-b867-de23b4a81a79\") " pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.039896 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl47x\" (UniqueName: \"kubernetes.io/projected/ff0ceecf-c774-4ee0-875b-44d4f58288a7-kube-api-access-wl47x\") pod \"barbican-db-create-rrxrk\" (UID: \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\") " pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.040009 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0ceecf-c774-4ee0-875b-44d4f58288a7-operator-scripts\") pod \"barbican-db-create-rrxrk\" (UID: \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\") " pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.040053 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac18c14-9769-4d41-b867-de23b4a81a79-operator-scripts\") pod \"cinder-db-create-m6cp2\" (UID: \"fac18c14-9769-4d41-b867-de23b4a81a79\") " pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.042005 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac18c14-9769-4d41-b867-de23b4a81a79-operator-scripts\") pod \"cinder-db-create-m6cp2\" (UID: \"fac18c14-9769-4d41-b867-de23b4a81a79\") " pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.113137 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.120616 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ab78-account-create-update-pxdld"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.142267 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl47x\" (UniqueName: \"kubernetes.io/projected/ff0ceecf-c774-4ee0-875b-44d4f58288a7-kube-api-access-wl47x\") pod \"barbican-db-create-rrxrk\" (UID: \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\") " pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.142327 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03ebace-5cad-464c-bcff-4ba2c6b50467-operator-scripts\") pod \"cinder-ab78-account-create-update-pxdld\" (UID: \"e03ebace-5cad-464c-bcff-4ba2c6b50467\") " pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.142393 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9f6s\" (UniqueName: \"kubernetes.io/projected/e03ebace-5cad-464c-bcff-4ba2c6b50467-kube-api-access-d9f6s\") pod \"cinder-ab78-account-create-update-pxdld\" (UID: \"e03ebace-5cad-464c-bcff-4ba2c6b50467\") " pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.142466 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0ceecf-c774-4ee0-875b-44d4f58288a7-operator-scripts\") pod \"barbican-db-create-rrxrk\" (UID: \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\") " pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.143247 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0ceecf-c774-4ee0-875b-44d4f58288a7-operator-scripts\") pod \"barbican-db-create-rrxrk\" (UID: \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\") " pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.159536 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d654p\" (UniqueName: \"kubernetes.io/projected/fac18c14-9769-4d41-b867-de23b4a81a79-kube-api-access-d654p\") pod \"cinder-db-create-m6cp2\" (UID: \"fac18c14-9769-4d41-b867-de23b4a81a79\") " pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.174191 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.214506 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-68ab-account-create-update-k9hvj"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.224950 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.232197 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.235491 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl47x\" (UniqueName: \"kubernetes.io/projected/ff0ceecf-c774-4ee0-875b-44d4f58288a7-kube-api-access-wl47x\") pod \"barbican-db-create-rrxrk\" (UID: \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\") " pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.247801 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhpf9\" (UniqueName: \"kubernetes.io/projected/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-kube-api-access-dhpf9\") pod \"barbican-68ab-account-create-update-k9hvj\" (UID: \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\") " pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.247966 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03ebace-5cad-464c-bcff-4ba2c6b50467-operator-scripts\") pod \"cinder-ab78-account-create-update-pxdld\" (UID: \"e03ebace-5cad-464c-bcff-4ba2c6b50467\") " pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.248029 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9f6s\" (UniqueName: \"kubernetes.io/projected/e03ebace-5cad-464c-bcff-4ba2c6b50467-kube-api-access-d9f6s\") pod \"cinder-ab78-account-create-update-pxdld\" (UID: \"e03ebace-5cad-464c-bcff-4ba2c6b50467\") " pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.248121 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-operator-scripts\") pod \"barbican-68ab-account-create-update-k9hvj\" (UID: \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\") " pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.250207 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.251161 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03ebace-5cad-464c-bcff-4ba2c6b50467-operator-scripts\") pod \"cinder-ab78-account-create-update-pxdld\" (UID: \"e03ebace-5cad-464c-bcff-4ba2c6b50467\") " pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.264612 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-68ab-account-create-update-k9hvj"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.301595 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9f6s\" (UniqueName: \"kubernetes.io/projected/e03ebace-5cad-464c-bcff-4ba2c6b50467-kube-api-access-d9f6s\") pod \"cinder-ab78-account-create-update-pxdld\" (UID: \"e03ebace-5cad-464c-bcff-4ba2c6b50467\") " pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.323701 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-m7vnw"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.352260 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-operator-scripts\") pod \"barbican-68ab-account-create-update-k9hvj\" (UID: \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\") " pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.352400 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhpf9\" (UniqueName: \"kubernetes.io/projected/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-kube-api-access-dhpf9\") pod \"barbican-68ab-account-create-update-k9hvj\" (UID: \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\") " pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.353738 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-operator-scripts\") pod \"barbican-68ab-account-create-update-k9hvj\" (UID: \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\") " pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.374205 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.384999 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.408916 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m7vnw"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.446662 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhpf9\" (UniqueName: \"kubernetes.io/projected/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-kube-api-access-dhpf9\") pod \"barbican-68ab-account-create-update-k9hvj\" (UID: \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\") " pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.531628 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3fab-account-create-update-b5ps7"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.536325 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.539256 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.549654 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3fab-account-create-update-b5ps7"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.559504 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-operator-scripts\") pod \"neutron-db-create-m7vnw\" (UID: \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\") " pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.559566 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6scqr\" (UniqueName: \"kubernetes.io/projected/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-kube-api-access-6scqr\") pod \"neutron-db-create-m7vnw\" (UID: \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\") " pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.656023 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-rt2kz"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.657294 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.658990 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.659200 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.659603 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xmd8n" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.659849 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.661012 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln6tz\" (UniqueName: \"kubernetes.io/projected/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-kube-api-access-ln6tz\") pod \"neutron-3fab-account-create-update-b5ps7\" (UID: \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\") " pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.661061 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-operator-scripts\") pod \"neutron-3fab-account-create-update-b5ps7\" (UID: \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\") " pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.661100 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-operator-scripts\") pod \"neutron-db-create-m7vnw\" (UID: \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\") " pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.661125 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6scqr\" (UniqueName: \"kubernetes.io/projected/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-kube-api-access-6scqr\") pod \"neutron-db-create-m7vnw\" (UID: \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\") " pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.662091 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-operator-scripts\") pod \"neutron-db-create-m7vnw\" (UID: \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\") " pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.667745 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rt2kz"] Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.700934 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6scqr\" (UniqueName: \"kubernetes.io/projected/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-kube-api-access-6scqr\") pod \"neutron-db-create-m7vnw\" (UID: \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\") " pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.715612 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.728475 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.762850 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln6tz\" (UniqueName: \"kubernetes.io/projected/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-kube-api-access-ln6tz\") pod \"neutron-3fab-account-create-update-b5ps7\" (UID: \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\") " pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.762945 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-operator-scripts\") pod \"neutron-3fab-account-create-update-b5ps7\" (UID: \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\") " pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.763282 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4dqf\" (UniqueName: \"kubernetes.io/projected/82dc6718-a141-4d1c-83b0-b08f4d5a8708-kube-api-access-j4dqf\") pod \"keystone-db-sync-rt2kz\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.763405 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-config-data\") pod \"keystone-db-sync-rt2kz\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.763512 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-combined-ca-bundle\") pod \"keystone-db-sync-rt2kz\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.763628 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-operator-scripts\") pod \"neutron-3fab-account-create-update-b5ps7\" (UID: \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\") " pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.779243 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln6tz\" (UniqueName: \"kubernetes.io/projected/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-kube-api-access-ln6tz\") pod \"neutron-3fab-account-create-update-b5ps7\" (UID: \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\") " pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.859991 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.864585 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-combined-ca-bundle\") pod \"keystone-db-sync-rt2kz\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.864645 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4dqf\" (UniqueName: \"kubernetes.io/projected/82dc6718-a141-4d1c-83b0-b08f4d5a8708-kube-api-access-j4dqf\") pod \"keystone-db-sync-rt2kz\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.864729 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-config-data\") pod \"keystone-db-sync-rt2kz\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.868680 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-combined-ca-bundle\") pod \"keystone-db-sync-rt2kz\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.868717 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-config-data\") pod \"keystone-db-sync-rt2kz\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.884365 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4dqf\" (UniqueName: \"kubernetes.io/projected/82dc6718-a141-4d1c-83b0-b08f4d5a8708-kube-api-access-j4dqf\") pod \"keystone-db-sync-rt2kz\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:17 crc kubenswrapper[4710]: I1128 17:18:17.992134 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:21 crc kubenswrapper[4710]: I1128 17:18:21.470173 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4h2ch" podUID="c9a14e8a-2aba-4827-8ff4-48858bec6075" containerName="ovn-controller" probeResult="failure" output=< Nov 28 17:18:21 crc kubenswrapper[4710]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 28 17:18:21 crc kubenswrapper[4710]: > Nov 28 17:18:24 crc kubenswrapper[4710]: E1128 17:18:24.419885 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 28 17:18:24 crc kubenswrapper[4710]: E1128 17:18:24.420583 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59bts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xw8td_openstack(a3835d37-f072-4310-a667-a7f398e80ab1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:18:24 crc kubenswrapper[4710]: E1128 17:18:24.421971 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-xw8td" podUID="a3835d37-f072-4310-a667-a7f398e80ab1" Nov 28 17:18:24 crc kubenswrapper[4710]: E1128 17:18:24.719179 4710 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-xw8td" podUID="a3835d37-f072-4310-a667-a7f398e80ab1" Nov 28 17:18:24 crc kubenswrapper[4710]: I1128 17:18:24.891388 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4h2ch-config-fh68s"] Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.357708 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-m7vnw"] Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.373875 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3fab-account-create-update-b5ps7"] Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.387429 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-rrxrk"] Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.397120 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-m6cp2"] Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.407971 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ab78-account-create-update-pxdld"] Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.417063 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-68ab-account-create-update-k9hvj"] Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.423381 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-rt2kz"] Nov 28 17:18:25 crc kubenswrapper[4710]: W1128 17:18:25.457534 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fdd3903_bbd8_4721_ae3b_866cbc2a73a7.slice/crio-31bb36b5506ad5b0a3fb2d6ca7763808b8a0614ff88e0a71a62eec073a8d89f6 WatchSource:0}: Error finding container 31bb36b5506ad5b0a3fb2d6ca7763808b8a0614ff88e0a71a62eec073a8d89f6: Status 404 returned error can't find the container with id 31bb36b5506ad5b0a3fb2d6ca7763808b8a0614ff88e0a71a62eec073a8d89f6 Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.719670 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m7vnw" event={"ID":"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8","Type":"ContainerStarted","Data":"6f90f7e3a94ed8431ac7dd2b20498bed4c5dfbdca9e7efd14e127dcfc5c968e6"} Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.745814 4710 generic.go:334] "Generic (PLEG): container finished" podID="a961a450-48ac-45fb-b979-5ff7a7407301" containerID="2429445f978de6ee97187b91c0c20a14eb1a3415cd961c14bbfe88858358d74f" exitCode=0 Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.745883 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4h2ch-config-fh68s" event={"ID":"a961a450-48ac-45fb-b979-5ff7a7407301","Type":"ContainerDied","Data":"2429445f978de6ee97187b91c0c20a14eb1a3415cd961c14bbfe88858358d74f"} Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.745909 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4h2ch-config-fh68s" event={"ID":"a961a450-48ac-45fb-b979-5ff7a7407301","Type":"ContainerStarted","Data":"79f874d3a7a0a3d11cf280faaedd88683b089d4a731ef6cdf2de3fcb7325f3ee"} Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.750907 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-68ab-account-create-update-k9hvj" 
event={"ID":"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7","Type":"ContainerStarted","Data":"31bb36b5506ad5b0a3fb2d6ca7763808b8a0614ff88e0a71a62eec073a8d89f6"} Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.756566 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rt2kz" event={"ID":"82dc6718-a141-4d1c-83b0-b08f4d5a8708","Type":"ContainerStarted","Data":"efdc84cfae7587712f5f5f662dd33d03db88405682e8ad5863b84d2f4c77c616"} Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.759609 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m6cp2" event={"ID":"fac18c14-9769-4d41-b867-de23b4a81a79","Type":"ContainerStarted","Data":"f621e1fe0d8acd1517339f0ffcfb1bef19dbf87a7551fdc74af80d3407058e06"} Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.762272 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rrxrk" event={"ID":"ff0ceecf-c774-4ee0-875b-44d4f58288a7","Type":"ContainerStarted","Data":"d3df097a09c110c2b3d2b8626cd94b136d2b2ec2f6e3d1e849fd44a690a8430f"} Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.772455 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ab78-account-create-update-pxdld" event={"ID":"e03ebace-5cad-464c-bcff-4ba2c6b50467","Type":"ContainerStarted","Data":"bdcfd1005ecd36edb53f4e19bf7266dc66a46a54bb76b03fc22a0c349129e35d"} Nov 28 17:18:25 crc kubenswrapper[4710]: I1128 17:18:25.775544 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3fab-account-create-update-b5ps7" event={"ID":"dc0af7ed-b562-4fcf-aaa1-f8b769241a67","Type":"ContainerStarted","Data":"cd3e47772e061eece570e1bbc50685aba29a0b58674aa5b20fb33f0bf8716529"} Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.469146 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4h2ch" Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.871255 4710 generic.go:334] "Generic (PLEG): container finished" podID="e03ebace-5cad-464c-bcff-4ba2c6b50467" containerID="74358610562ca38634a776eaaaed7138a2760140e166c7e08d5e1c9dd7c1335c" exitCode=0 Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.871650 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ab78-account-create-update-pxdld" event={"ID":"e03ebace-5cad-464c-bcff-4ba2c6b50467","Type":"ContainerDied","Data":"74358610562ca38634a776eaaaed7138a2760140e166c7e08d5e1c9dd7c1335c"} Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.897425 4710 generic.go:334] "Generic (PLEG): container finished" podID="dc0af7ed-b562-4fcf-aaa1-f8b769241a67" containerID="cc63ebbeb782c1957f113a9d46257d10dec462ae32af3889d56f766308a77fcb" exitCode=0 Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.897541 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3fab-account-create-update-b5ps7" event={"ID":"dc0af7ed-b562-4fcf-aaa1-f8b769241a67","Type":"ContainerDied","Data":"cc63ebbeb782c1957f113a9d46257d10dec462ae32af3889d56f766308a77fcb"} Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.906108 4710 generic.go:334] "Generic (PLEG): container finished" podID="ebce039b-b25e-4102-bfd5-f55b7f0fa9b8" containerID="99ce6b0bc21226ae929aef16d5409005cf8a1690d76cde10a8a9ef6255fef34f" exitCode=0 Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.906246 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m7vnw" 
event={"ID":"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8","Type":"ContainerDied","Data":"99ce6b0bc21226ae929aef16d5409005cf8a1690d76cde10a8a9ef6255fef34f"} Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.911948 4710 generic.go:334] "Generic (PLEG): container finished" podID="3fdd3903-bbd8-4721-ae3b-866cbc2a73a7" containerID="cb98005ba3317cd7fba72c4655926b8c9a2ec6c45621dcfb53deff26b1c2bd50" exitCode=0 Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.912016 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-68ab-account-create-update-k9hvj" event={"ID":"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7","Type":"ContainerDied","Data":"cb98005ba3317cd7fba72c4655926b8c9a2ec6c45621dcfb53deff26b1c2bd50"} Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.913652 4710 generic.go:334] "Generic (PLEG): container finished" podID="fac18c14-9769-4d41-b867-de23b4a81a79" containerID="8b80f8bc25903344d34bef3d1815d369c98f0f341ea0d5e3cd7c4f8592c1ebc6" exitCode=0 Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.913694 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m6cp2" event={"ID":"fac18c14-9769-4d41-b867-de23b4a81a79","Type":"ContainerDied","Data":"8b80f8bc25903344d34bef3d1815d369c98f0f341ea0d5e3cd7c4f8592c1ebc6"} Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.917681 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"03081a31d74d135b779a740132e0be084b3afe99c44be0ea3be6e72464cfc574"} Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.917724 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"120bfb3a5cbec12ea0520183162304cebdc11c4dfa62157710f8a8880db83659"} Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.917734 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"888614931ca003b256c83f272465c994161250fc500e09bda60b2756cf52003d"} Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.919682 4710 generic.go:334] "Generic (PLEG): container finished" podID="ff0ceecf-c774-4ee0-875b-44d4f58288a7" containerID="8a725516ff45127561ac16fc0247e7efe9a6998a964fcbe71a7f3a44e88519ee" exitCode=0 Nov 28 17:18:26 crc kubenswrapper[4710]: I1128 17:18:26.919862 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rrxrk" event={"ID":"ff0ceecf-c774-4ee0-875b-44d4f58288a7","Type":"ContainerDied","Data":"8a725516ff45127561ac16fc0247e7efe9a6998a964fcbe71a7f3a44e88519ee"} Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.346900 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486014 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-scripts\") pod \"a961a450-48ac-45fb-b979-5ff7a7407301\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486078 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-additional-scripts\") pod \"a961a450-48ac-45fb-b979-5ff7a7407301\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486150 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run\") pod \"a961a450-48ac-45fb-b979-5ff7a7407301\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486196 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fljhl\" (UniqueName: \"kubernetes.io/projected/a961a450-48ac-45fb-b979-5ff7a7407301-kube-api-access-fljhl\") pod \"a961a450-48ac-45fb-b979-5ff7a7407301\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486218 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run-ovn\") pod \"a961a450-48ac-45fb-b979-5ff7a7407301\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486313 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-log-ovn\") pod \"a961a450-48ac-45fb-b979-5ff7a7407301\" (UID: \"a961a450-48ac-45fb-b979-5ff7a7407301\") " Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486604 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "a961a450-48ac-45fb-b979-5ff7a7407301" (UID: "a961a450-48ac-45fb-b979-5ff7a7407301"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486639 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "a961a450-48ac-45fb-b979-5ff7a7407301" (UID: "a961a450-48ac-45fb-b979-5ff7a7407301"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486663 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run" (OuterVolumeSpecName: "var-run") pod "a961a450-48ac-45fb-b979-5ff7a7407301" (UID: "a961a450-48ac-45fb-b979-5ff7a7407301"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486883 4710 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486899 4710 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.486908 4710 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/a961a450-48ac-45fb-b979-5ff7a7407301-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.487175 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "a961a450-48ac-45fb-b979-5ff7a7407301" (UID: "a961a450-48ac-45fb-b979-5ff7a7407301"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.487604 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-scripts" (OuterVolumeSpecName: "scripts") pod "a961a450-48ac-45fb-b979-5ff7a7407301" (UID: "a961a450-48ac-45fb-b979-5ff7a7407301"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.491616 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a961a450-48ac-45fb-b979-5ff7a7407301-kube-api-access-fljhl" (OuterVolumeSpecName: "kube-api-access-fljhl") pod "a961a450-48ac-45fb-b979-5ff7a7407301" (UID: "a961a450-48ac-45fb-b979-5ff7a7407301"). InnerVolumeSpecName "kube-api-access-fljhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.588543 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.588596 4710 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/a961a450-48ac-45fb-b979-5ff7a7407301-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.588607 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fljhl\" (UniqueName: \"kubernetes.io/projected/a961a450-48ac-45fb-b979-5ff7a7407301-kube-api-access-fljhl\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.932137 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"75635ecf821add230c504bb3031bab909ec9783a4ca3e91f3e0ca2d82b36bd7b"} Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.933952 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4h2ch-config-fh68s" event={"ID":"a961a450-48ac-45fb-b979-5ff7a7407301","Type":"ContainerDied","Data":"79f874d3a7a0a3d11cf280faaedd88683b089d4a731ef6cdf2de3fcb7325f3ee"} Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.933999 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79f874d3a7a0a3d11cf280faaedd88683b089d4a731ef6cdf2de3fcb7325f3ee" Nov 28 17:18:27 crc kubenswrapper[4710]: I1128 17:18:27.934107 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4h2ch-config-fh68s" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.469738 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4h2ch-config-fh68s"] Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.474119 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4h2ch-config-fh68s"] Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.575326 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4h2ch-config-vnr5f"] Nov 28 17:18:28 crc kubenswrapper[4710]: E1128 17:18:28.575723 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a961a450-48ac-45fb-b979-5ff7a7407301" containerName="ovn-config" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.575735 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="a961a450-48ac-45fb-b979-5ff7a7407301" containerName="ovn-config" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.576006 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="a961a450-48ac-45fb-b979-5ff7a7407301" containerName="ovn-config" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.576721 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.581701 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.588890 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4h2ch-config-vnr5f"] Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.709241 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxn4q\" (UniqueName: \"kubernetes.io/projected/8cabd31d-a832-4c72-b37b-a6d889378e47-kube-api-access-dxn4q\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.709369 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-log-ovn\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.709411 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.709440 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-scripts\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.709465 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-additional-scripts\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.709699 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run-ovn\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.810969 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-log-ovn\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.811046 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run\") pod 
\"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.811090 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-scripts\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.811126 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-additional-scripts\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.811194 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run-ovn\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.811231 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxn4q\" (UniqueName: \"kubernetes.io/projected/8cabd31d-a832-4c72-b37b-a6d889378e47-kube-api-access-dxn4q\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.811388 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.811400 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run-ovn\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.812567 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-additional-scripts\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.813738 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-scripts\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.813821 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-log-ovn\") pod 
\"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.843319 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxn4q\" (UniqueName: \"kubernetes.io/projected/8cabd31d-a832-4c72-b37b-a6d889378e47-kube-api-access-dxn4q\") pod \"ovn-controller-4h2ch-config-vnr5f\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.897843 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.936172 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.944559 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.949054 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-m7vnw" event={"ID":"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8","Type":"ContainerDied","Data":"6f90f7e3a94ed8431ac7dd2b20498bed4c5dfbdca9e7efd14e127dcfc5c968e6"} Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.949085 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f90f7e3a94ed8431ac7dd2b20498bed4c5dfbdca9e7efd14e127dcfc5c968e6" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.949130 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-m7vnw" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.952527 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.954017 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ab78-account-create-update-pxdld" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.954017 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ab78-account-create-update-pxdld" event={"ID":"e03ebace-5cad-464c-bcff-4ba2c6b50467","Type":"ContainerDied","Data":"bdcfd1005ecd36edb53f4e19bf7266dc66a46a54bb76b03fc22a0c349129e35d"} Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.954124 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdcfd1005ecd36edb53f4e19bf7266dc66a46a54bb76b03fc22a0c349129e35d" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.956222 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-rrxrk" event={"ID":"ff0ceecf-c774-4ee0-875b-44d4f58288a7","Type":"ContainerDied","Data":"d3df097a09c110c2b3d2b8626cd94b136d2b2ec2f6e3d1e849fd44a690a8430f"} Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.956245 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3df097a09c110c2b3d2b8626cd94b136d2b2ec2f6e3d1e849fd44a690a8430f" Nov 28 17:18:28 crc kubenswrapper[4710]: I1128 17:18:28.956280 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-rrxrk" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.013531 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl47x\" (UniqueName: \"kubernetes.io/projected/ff0ceecf-c774-4ee0-875b-44d4f58288a7-kube-api-access-wl47x\") pod \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\" (UID: \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\") " Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.014522 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0ceecf-c774-4ee0-875b-44d4f58288a7-operator-scripts\") pod \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\" (UID: \"ff0ceecf-c774-4ee0-875b-44d4f58288a7\") " Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.014660 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9f6s\" (UniqueName: \"kubernetes.io/projected/e03ebace-5cad-464c-bcff-4ba2c6b50467-kube-api-access-d9f6s\") pod \"e03ebace-5cad-464c-bcff-4ba2c6b50467\" (UID: \"e03ebace-5cad-464c-bcff-4ba2c6b50467\") " Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.014703 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-operator-scripts\") pod \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\" (UID: \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\") " Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.014879 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03ebace-5cad-464c-bcff-4ba2c6b50467-operator-scripts\") pod \"e03ebace-5cad-464c-bcff-4ba2c6b50467\" (UID: \"e03ebace-5cad-464c-bcff-4ba2c6b50467\") " Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.014927 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6scqr\" (UniqueName: \"kubernetes.io/projected/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-kube-api-access-6scqr\") pod \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\" (UID: \"ebce039b-b25e-4102-bfd5-f55b7f0fa9b8\") " Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.015213 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff0ceecf-c774-4ee0-875b-44d4f58288a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ff0ceecf-c774-4ee0-875b-44d4f58288a7" (UID: "ff0ceecf-c774-4ee0-875b-44d4f58288a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.015287 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ebce039b-b25e-4102-bfd5-f55b7f0fa9b8" (UID: "ebce039b-b25e-4102-bfd5-f55b7f0fa9b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.015429 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e03ebace-5cad-464c-bcff-4ba2c6b50467-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e03ebace-5cad-464c-bcff-4ba2c6b50467" (UID: "e03ebace-5cad-464c-bcff-4ba2c6b50467"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.015916 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ff0ceecf-c774-4ee0-875b-44d4f58288a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.015962 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.015974 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e03ebace-5cad-464c-bcff-4ba2c6b50467-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.019600 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e03ebace-5cad-464c-bcff-4ba2c6b50467-kube-api-access-d9f6s" (OuterVolumeSpecName: "kube-api-access-d9f6s") pod "e03ebace-5cad-464c-bcff-4ba2c6b50467" (UID: "e03ebace-5cad-464c-bcff-4ba2c6b50467"). InnerVolumeSpecName "kube-api-access-d9f6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.032013 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff0ceecf-c774-4ee0-875b-44d4f58288a7-kube-api-access-wl47x" (OuterVolumeSpecName: "kube-api-access-wl47x") pod "ff0ceecf-c774-4ee0-875b-44d4f58288a7" (UID: "ff0ceecf-c774-4ee0-875b-44d4f58288a7"). InnerVolumeSpecName "kube-api-access-wl47x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.033112 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-kube-api-access-6scqr" (OuterVolumeSpecName: "kube-api-access-6scqr") pod "ebce039b-b25e-4102-bfd5-f55b7f0fa9b8" (UID: "ebce039b-b25e-4102-bfd5-f55b7f0fa9b8"). InnerVolumeSpecName "kube-api-access-6scqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.117878 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl47x\" (UniqueName: \"kubernetes.io/projected/ff0ceecf-c774-4ee0-875b-44d4f58288a7-kube-api-access-wl47x\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.117913 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9f6s\" (UniqueName: \"kubernetes.io/projected/e03ebace-5cad-464c-bcff-4ba2c6b50467-kube-api-access-d9f6s\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.117923 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6scqr\" (UniqueName: \"kubernetes.io/projected/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8-kube-api-access-6scqr\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:29 crc kubenswrapper[4710]: I1128 17:18:29.177584 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a961a450-48ac-45fb-b979-5ff7a7407301" path="/var/lib/kubelet/pods/a961a450-48ac-45fb-b979-5ff7a7407301/volumes" Nov 28 17:18:29 crc kubenswrapper[4710]: E1128 17:18:29.386920 4710 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebce039b_b25e_4102_bfd5_f55b7f0fa9b8.slice/crio-6f90f7e3a94ed8431ac7dd2b20498bed4c5dfbdca9e7efd14e127dcfc5c968e6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode03ebace_5cad_464c_bcff_4ba2c6b50467.slice/crio-bdcfd1005ecd36edb53f4e19bf7266dc66a46a54bb76b03fc22a0c349129e35d\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff0ceecf_c774_4ee0_875b_44d4f58288a7.slice\": RecentStats: unable to find data in memory cache]" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.759218 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.788188 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.795800 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.889853 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac18c14-9769-4d41-b867-de23b4a81a79-operator-scripts\") pod \"fac18c14-9769-4d41-b867-de23b4a81a79\" (UID: \"fac18c14-9769-4d41-b867-de23b4a81a79\") " Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.889948 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhpf9\" (UniqueName: \"kubernetes.io/projected/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-kube-api-access-dhpf9\") pod \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\" (UID: \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\") " Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.890010 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-operator-scripts\") pod \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\" (UID: \"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7\") " Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.890056 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d654p\" (UniqueName: \"kubernetes.io/projected/fac18c14-9769-4d41-b867-de23b4a81a79-kube-api-access-d654p\") pod \"fac18c14-9769-4d41-b867-de23b4a81a79\" (UID: \"fac18c14-9769-4d41-b867-de23b4a81a79\") " Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.890119 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln6tz\" (UniqueName: \"kubernetes.io/projected/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-kube-api-access-ln6tz\") pod \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\" (UID: \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\") " Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.890146 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-operator-scripts\") pod \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\" (UID: \"dc0af7ed-b562-4fcf-aaa1-f8b769241a67\") " Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.890678 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fac18c14-9769-4d41-b867-de23b4a81a79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fac18c14-9769-4d41-b867-de23b4a81a79" (UID: "fac18c14-9769-4d41-b867-de23b4a81a79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.890692 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc0af7ed-b562-4fcf-aaa1-f8b769241a67" (UID: "dc0af7ed-b562-4fcf-aaa1-f8b769241a67"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.890680 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fdd3903-bbd8-4721-ae3b-866cbc2a73a7" (UID: "3fdd3903-bbd8-4721-ae3b-866cbc2a73a7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.894647 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-kube-api-access-ln6tz" (OuterVolumeSpecName: "kube-api-access-ln6tz") pod "dc0af7ed-b562-4fcf-aaa1-f8b769241a67" (UID: "dc0af7ed-b562-4fcf-aaa1-f8b769241a67"). InnerVolumeSpecName "kube-api-access-ln6tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.894698 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-kube-api-access-dhpf9" (OuterVolumeSpecName: "kube-api-access-dhpf9") pod "3fdd3903-bbd8-4721-ae3b-866cbc2a73a7" (UID: "3fdd3903-bbd8-4721-ae3b-866cbc2a73a7"). InnerVolumeSpecName "kube-api-access-dhpf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.895986 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fac18c14-9769-4d41-b867-de23b4a81a79-kube-api-access-d654p" (OuterVolumeSpecName: "kube-api-access-d654p") pod "fac18c14-9769-4d41-b867-de23b4a81a79" (UID: "fac18c14-9769-4d41-b867-de23b4a81a79"). InnerVolumeSpecName "kube-api-access-d654p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.979231 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4h2ch-config-vnr5f"] Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.991894 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhpf9\" (UniqueName: \"kubernetes.io/projected/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-kube-api-access-dhpf9\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.991967 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.991978 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d654p\" (UniqueName: \"kubernetes.io/projected/fac18c14-9769-4d41-b867-de23b4a81a79-kube-api-access-d654p\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.991987 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln6tz\" (UniqueName: \"kubernetes.io/projected/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-kube-api-access-ln6tz\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.991995 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc0af7ed-b562-4fcf-aaa1-f8b769241a67-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:32 crc kubenswrapper[4710]: I1128 17:18:32.992003 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fac18c14-9769-4d41-b867-de23b4a81a79-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:33 crc kubenswrapper[4710]: W1128 17:18:33.007953 4710 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8cabd31d_a832_4c72_b37b_a6d889378e47.slice/crio-2b71b0f47f27a2e13bdcdb076f81228869e33b7fd88ccffcf0fc633fcd4b833c WatchSource:0}: Error finding container 2b71b0f47f27a2e13bdcdb076f81228869e33b7fd88ccffcf0fc633fcd4b833c: Status 404 returned error can't find the container with id 2b71b0f47f27a2e13bdcdb076f81228869e33b7fd88ccffcf0fc633fcd4b833c Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.008547 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"35a799331f654535a595437d7b1b3fc695bc14d339162a3ef5104fa296dba2e7"} Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.013716 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3fab-account-create-update-b5ps7" Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.013730 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3fab-account-create-update-b5ps7" event={"ID":"dc0af7ed-b562-4fcf-aaa1-f8b769241a67","Type":"ContainerDied","Data":"cd3e47772e061eece570e1bbc50685aba29a0b58674aa5b20fb33f0bf8716529"} Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.013799 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd3e47772e061eece570e1bbc50685aba29a0b58674aa5b20fb33f0bf8716529" Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.015694 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-68ab-account-create-update-k9hvj" event={"ID":"3fdd3903-bbd8-4721-ae3b-866cbc2a73a7","Type":"ContainerDied","Data":"31bb36b5506ad5b0a3fb2d6ca7763808b8a0614ff88e0a71a62eec073a8d89f6"} Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.015730 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31bb36b5506ad5b0a3fb2d6ca7763808b8a0614ff88e0a71a62eec073a8d89f6" Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.015707 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-68ab-account-create-update-k9hvj" Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.017316 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rt2kz" event={"ID":"82dc6718-a141-4d1c-83b0-b08f4d5a8708","Type":"ContainerStarted","Data":"9d349382d11f56eb75155aeae7b9d92047fb6e48c98fac5f8a3db865e03a0a54"} Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.020533 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-m6cp2" event={"ID":"fac18c14-9769-4d41-b867-de23b4a81a79","Type":"ContainerDied","Data":"f621e1fe0d8acd1517339f0ffcfb1bef19dbf87a7551fdc74af80d3407058e06"} Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.020600 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f621e1fe0d8acd1517339f0ffcfb1bef19dbf87a7551fdc74af80d3407058e06" Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.020678 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-m6cp2" Nov 28 17:18:33 crc kubenswrapper[4710]: I1128 17:18:33.045846 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-rt2kz" podStartSLOduration=8.976374828 podStartE2EDuration="16.045826795s" podCreationTimestamp="2025-11-28 17:18:17 +0000 UTC" firstStartedPulling="2025-11-28 17:18:25.448953135 +0000 UTC m=+1194.707253180" lastFinishedPulling="2025-11-28 17:18:32.518405102 +0000 UTC m=+1201.776705147" observedRunningTime="2025-11-28 17:18:33.039198496 +0000 UTC m=+1202.297498551" watchObservedRunningTime="2025-11-28 17:18:33.045826795 +0000 UTC m=+1202.304126850" Nov 28 17:18:34 crc kubenswrapper[4710]: I1128 17:18:34.078017 4710 generic.go:334] "Generic (PLEG): container finished" podID="8cabd31d-a832-4c72-b37b-a6d889378e47" containerID="670968ed8d0ca14e5820522e131f1e9115dfbdd62f7f6b6cd1010a5b9df4d3fc" exitCode=0 Nov 28 17:18:34 crc kubenswrapper[4710]: I1128 17:18:34.078588 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4h2ch-config-vnr5f" event={"ID":"8cabd31d-a832-4c72-b37b-a6d889378e47","Type":"ContainerDied","Data":"670968ed8d0ca14e5820522e131f1e9115dfbdd62f7f6b6cd1010a5b9df4d3fc"} Nov 28 17:18:34 crc kubenswrapper[4710]: I1128 17:18:34.078620 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4h2ch-config-vnr5f" event={"ID":"8cabd31d-a832-4c72-b37b-a6d889378e47","Type":"ContainerStarted","Data":"2b71b0f47f27a2e13bdcdb076f81228869e33b7fd88ccffcf0fc633fcd4b833c"} Nov 28 17:18:34 crc kubenswrapper[4710]: I1128 17:18:34.090194 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"973d11827f10631e07c1d6135e0646a2fcf8970f946bdab41903583697434cc5"} Nov 28 17:18:34 crc kubenswrapper[4710]: I1128 17:18:34.090240 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"209b5fcfd61756f3f064335518012841b26415f33fea6fa402534cc5f9a81007"} Nov 28 17:18:34 crc kubenswrapper[4710]: I1128 17:18:34.090251 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"e4c670b7f264d3e25b9c5520bd1c57066f9817789b4d7474dffa1a4d18c21ac3"} Nov 28 17:18:34 crc kubenswrapper[4710]: I1128 17:18:34.090261 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"64b30f5ffc982c0214579e4e31a11b2a771d200241e42ce1c3033f27f80d4f48"} Nov 28 17:18:34 crc kubenswrapper[4710]: I1128 17:18:34.090271 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"c71733cda2cf9ea02124a405ea0fe3e97b4bd286ac834da3f43e69c7153e7445"} Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.113826 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"96a67841-bed8-4758-a152-31602db98d49","Type":"ContainerStarted","Data":"2400abca3a3416742406ead02c4153eff1d0820ecc5ee453fab385bd11be2052"} Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.184465 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/swift-storage-0" podStartSLOduration=21.257164672 podStartE2EDuration="44.184438873s" podCreationTimestamp="2025-11-28 17:17:51 +0000 UTC" firstStartedPulling="2025-11-28 17:18:09.581961534 +0000 UTC m=+1178.840261569" lastFinishedPulling="2025-11-28 17:18:32.509235725 +0000 UTC m=+1201.767535770" observedRunningTime="2025-11-28 17:18:35.159446248 +0000 UTC m=+1204.417746333" watchObservedRunningTime="2025-11-28 17:18:35.184438873 +0000 UTC m=+1204.442738958" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.460616 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-bhnkl"] Nov 28 17:18:35 crc kubenswrapper[4710]: E1128 17:18:35.461533 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc0af7ed-b562-4fcf-aaa1-f8b769241a67" containerName="mariadb-account-create-update" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.461632 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc0af7ed-b562-4fcf-aaa1-f8b769241a67" containerName="mariadb-account-create-update" Nov 28 17:18:35 crc kubenswrapper[4710]: E1128 17:18:35.461700 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fac18c14-9769-4d41-b867-de23b4a81a79" containerName="mariadb-database-create" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.461754 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="fac18c14-9769-4d41-b867-de23b4a81a79" containerName="mariadb-database-create" Nov 28 17:18:35 crc kubenswrapper[4710]: E1128 17:18:35.462151 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff0ceecf-c774-4ee0-875b-44d4f58288a7" containerName="mariadb-database-create" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.462217 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff0ceecf-c774-4ee0-875b-44d4f58288a7" containerName="mariadb-database-create" Nov 28 17:18:35 crc kubenswrapper[4710]: E1128 17:18:35.462282 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fdd3903-bbd8-4721-ae3b-866cbc2a73a7" containerName="mariadb-account-create-update" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.462345 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fdd3903-bbd8-4721-ae3b-866cbc2a73a7" containerName="mariadb-account-create-update" Nov 28 17:18:35 crc kubenswrapper[4710]: E1128 17:18:35.462445 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e03ebace-5cad-464c-bcff-4ba2c6b50467" containerName="mariadb-account-create-update" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.462507 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e03ebace-5cad-464c-bcff-4ba2c6b50467" containerName="mariadb-account-create-update" Nov 28 17:18:35 crc kubenswrapper[4710]: E1128 17:18:35.462574 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebce039b-b25e-4102-bfd5-f55b7f0fa9b8" containerName="mariadb-database-create" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.462642 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebce039b-b25e-4102-bfd5-f55b7f0fa9b8" containerName="mariadb-database-create" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.469567 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebce039b-b25e-4102-bfd5-f55b7f0fa9b8" containerName="mariadb-database-create" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.469705 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="fac18c14-9769-4d41-b867-de23b4a81a79" 
containerName="mariadb-database-create" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.469803 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="e03ebace-5cad-464c-bcff-4ba2c6b50467" containerName="mariadb-account-create-update" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.469884 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc0af7ed-b562-4fcf-aaa1-f8b769241a67" containerName="mariadb-account-create-update" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.470001 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fdd3903-bbd8-4721-ae3b-866cbc2a73a7" containerName="mariadb-account-create-update" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.470074 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff0ceecf-c774-4ee0-875b-44d4f58288a7" containerName="mariadb-database-create" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.471376 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.473503 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.474253 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-bhnkl"] Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.543368 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.543410 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-config\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.543508 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.543542 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.543560 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xll6b\" (UniqueName: \"kubernetes.io/projected/326b2f12-36ee-4772-820e-4f03c5919bd0-kube-api-access-xll6b\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.543584 4710 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.554114 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.644568 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-log-ovn\") pod \"8cabd31d-a832-4c72-b37b-a6d889378e47\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.644672 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8cabd31d-a832-4c72-b37b-a6d889378e47" (UID: "8cabd31d-a832-4c72-b37b-a6d889378e47"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.644748 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-additional-scripts\") pod \"8cabd31d-a832-4c72-b37b-a6d889378e47\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.644847 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run-ovn\") pod \"8cabd31d-a832-4c72-b37b-a6d889378e47\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.644864 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run\") pod \"8cabd31d-a832-4c72-b37b-a6d889378e47\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.644935 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-scripts\") pod \"8cabd31d-a832-4c72-b37b-a6d889378e47\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.644959 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8cabd31d-a832-4c72-b37b-a6d889378e47" (UID: "8cabd31d-a832-4c72-b37b-a6d889378e47"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.644981 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxn4q\" (UniqueName: \"kubernetes.io/projected/8cabd31d-a832-4c72-b37b-a6d889378e47-kube-api-access-dxn4q\") pod \"8cabd31d-a832-4c72-b37b-a6d889378e47\" (UID: \"8cabd31d-a832-4c72-b37b-a6d889378e47\") " Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645008 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run" (OuterVolumeSpecName: "var-run") pod "8cabd31d-a832-4c72-b37b-a6d889378e47" (UID: "8cabd31d-a832-4c72-b37b-a6d889378e47"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645216 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645304 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-config\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645319 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645451 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645486 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645503 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xll6b\" (UniqueName: \"kubernetes.io/projected/326b2f12-36ee-4772-820e-4f03c5919bd0-kube-api-access-xll6b\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645526 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8cabd31d-a832-4c72-b37b-a6d889378e47" (UID: "8cabd31d-a832-4c72-b37b-a6d889378e47"). 
InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645567 4710 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645586 4710 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645598 4710 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8cabd31d-a832-4c72-b37b-a6d889378e47-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.645902 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-scripts" (OuterVolumeSpecName: "scripts") pod "8cabd31d-a832-4c72-b37b-a6d889378e47" (UID: "8cabd31d-a832-4c72-b37b-a6d889378e47"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.646398 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.646446 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.646814 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-config\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.646977 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.647249 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.665311 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xll6b\" (UniqueName: \"kubernetes.io/projected/326b2f12-36ee-4772-820e-4f03c5919bd0-kube-api-access-xll6b\") pod \"dnsmasq-dns-5c79d794d7-bhnkl\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " 
pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.667000 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cabd31d-a832-4c72-b37b-a6d889378e47-kube-api-access-dxn4q" (OuterVolumeSpecName: "kube-api-access-dxn4q") pod "8cabd31d-a832-4c72-b37b-a6d889378e47" (UID: "8cabd31d-a832-4c72-b37b-a6d889378e47"). InnerVolumeSpecName "kube-api-access-dxn4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.747120 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.747158 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxn4q\" (UniqueName: \"kubernetes.io/projected/8cabd31d-a832-4c72-b37b-a6d889378e47-kube-api-access-dxn4q\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.747169 4710 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8cabd31d-a832-4c72-b37b-a6d889378e47-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:35 crc kubenswrapper[4710]: I1128 17:18:35.863950 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:36 crc kubenswrapper[4710]: I1128 17:18:36.123908 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4h2ch-config-vnr5f" event={"ID":"8cabd31d-a832-4c72-b37b-a6d889378e47","Type":"ContainerDied","Data":"2b71b0f47f27a2e13bdcdb076f81228869e33b7fd88ccffcf0fc633fcd4b833c"} Nov 28 17:18:36 crc kubenswrapper[4710]: I1128 17:18:36.124190 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b71b0f47f27a2e13bdcdb076f81228869e33b7fd88ccffcf0fc633fcd4b833c" Nov 28 17:18:36 crc kubenswrapper[4710]: I1128 17:18:36.124246 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4h2ch-config-vnr5f" Nov 28 17:18:36 crc kubenswrapper[4710]: I1128 17:18:36.127162 4710 generic.go:334] "Generic (PLEG): container finished" podID="82dc6718-a141-4d1c-83b0-b08f4d5a8708" containerID="9d349382d11f56eb75155aeae7b9d92047fb6e48c98fac5f8a3db865e03a0a54" exitCode=0 Nov 28 17:18:36 crc kubenswrapper[4710]: I1128 17:18:36.127261 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rt2kz" event={"ID":"82dc6718-a141-4d1c-83b0-b08f4d5a8708","Type":"ContainerDied","Data":"9d349382d11f56eb75155aeae7b9d92047fb6e48c98fac5f8a3db865e03a0a54"} Nov 28 17:18:36 crc kubenswrapper[4710]: I1128 17:18:36.314324 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-bhnkl"] Nov 28 17:18:36 crc kubenswrapper[4710]: W1128 17:18:36.317033 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod326b2f12_36ee_4772_820e_4f03c5919bd0.slice/crio-ebc370a3eb04c40e7cd82803c741e3e7e22204b79411b2c43022f9a57cbded1b WatchSource:0}: Error finding container ebc370a3eb04c40e7cd82803c741e3e7e22204b79411b2c43022f9a57cbded1b: Status 404 returned error can't find the container with id ebc370a3eb04c40e7cd82803c741e3e7e22204b79411b2c43022f9a57cbded1b Nov 28 17:18:36 crc kubenswrapper[4710]: I1128 17:18:36.626031 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4h2ch-config-vnr5f"] Nov 28 17:18:36 crc kubenswrapper[4710]: I1128 17:18:36.633033 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4h2ch-config-vnr5f"] Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.156631 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.171960 4710 generic.go:334] "Generic (PLEG): container finished" podID="326b2f12-36ee-4772-820e-4f03c5919bd0" containerID="0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820" exitCode=0 Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.208842 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cabd31d-a832-4c72-b37b-a6d889378e47" path="/var/lib/kubelet/pods/8cabd31d-a832-4c72-b37b-a6d889378e47/volumes" Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.209566 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" event={"ID":"326b2f12-36ee-4772-820e-4f03c5919bd0","Type":"ContainerDied","Data":"0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820"} Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.209597 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" event={"ID":"326b2f12-36ee-4772-820e-4f03c5919bd0","Type":"ContainerStarted","Data":"ebc370a3eb04c40e7cd82803c741e3e7e22204b79411b2c43022f9a57cbded1b"} Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.652648 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.788453 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-config-data\") pod \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.788646 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-combined-ca-bundle\") pod \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.788717 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4dqf\" (UniqueName: \"kubernetes.io/projected/82dc6718-a141-4d1c-83b0-b08f4d5a8708-kube-api-access-j4dqf\") pod \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\" (UID: \"82dc6718-a141-4d1c-83b0-b08f4d5a8708\") " Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.794451 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82dc6718-a141-4d1c-83b0-b08f4d5a8708-kube-api-access-j4dqf" (OuterVolumeSpecName: "kube-api-access-j4dqf") pod "82dc6718-a141-4d1c-83b0-b08f4d5a8708" (UID: "82dc6718-a141-4d1c-83b0-b08f4d5a8708"). InnerVolumeSpecName "kube-api-access-j4dqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.825492 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82dc6718-a141-4d1c-83b0-b08f4d5a8708" (UID: "82dc6718-a141-4d1c-83b0-b08f4d5a8708"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.839283 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-config-data" (OuterVolumeSpecName: "config-data") pod "82dc6718-a141-4d1c-83b0-b08f4d5a8708" (UID: "82dc6718-a141-4d1c-83b0-b08f4d5a8708"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.890672 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.891042 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4dqf\" (UniqueName: \"kubernetes.io/projected/82dc6718-a141-4d1c-83b0-b08f4d5a8708-kube-api-access-j4dqf\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:37 crc kubenswrapper[4710]: I1128 17:18:37.891056 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82dc6718-a141-4d1c-83b0-b08f4d5a8708-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.191307 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-rt2kz" event={"ID":"82dc6718-a141-4d1c-83b0-b08f4d5a8708","Type":"ContainerDied","Data":"efdc84cfae7587712f5f5f662dd33d03db88405682e8ad5863b84d2f4c77c616"} Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.191331 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-rt2kz" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.191349 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efdc84cfae7587712f5f5f662dd33d03db88405682e8ad5863b84d2f4c77c616" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.193168 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" event={"ID":"326b2f12-36ee-4772-820e-4f03c5919bd0","Type":"ContainerStarted","Data":"5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce"} Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.193369 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.214715 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" podStartSLOduration=3.2146916 podStartE2EDuration="3.2146916s" podCreationTimestamp="2025-11-28 17:18:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:18:38.209387593 +0000 UTC m=+1207.467687638" watchObservedRunningTime="2025-11-28 17:18:38.2146916 +0000 UTC m=+1207.472991645" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.402394 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-bhnkl"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.476194 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kh4mg"] Nov 28 17:18:38 crc kubenswrapper[4710]: E1128 17:18:38.476848 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cabd31d-a832-4c72-b37b-a6d889378e47" containerName="ovn-config" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.476863 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cabd31d-a832-4c72-b37b-a6d889378e47" containerName="ovn-config" Nov 28 17:18:38 crc kubenswrapper[4710]: E1128 17:18:38.476899 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82dc6718-a141-4d1c-83b0-b08f4d5a8708" containerName="keystone-db-sync" Nov 28 
17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.476908 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="82dc6718-a141-4d1c-83b0-b08f4d5a8708" containerName="keystone-db-sync" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.477147 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="82dc6718-a141-4d1c-83b0-b08f4d5a8708" containerName="keystone-db-sync" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.477177 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cabd31d-a832-4c72-b37b-a6d889378e47" containerName="ovn-config" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.477994 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.489370 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.489579 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.489693 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.489942 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xmd8n" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.490145 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.497445 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nn89w"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.499191 4710 util.go:30] "No sandbox for pod can be found. 
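Before the keystone-bootstrap-kh4mg mounts can proceed, the reflector entries above show kubelet populating a watch cache for every Secret the new pod references (keystone, keystone-scripts, keystone-config-data, the dockercfg pull secret, osp-secret). The volume-referenced subset can be read straight off the pod spec; a sketch using the official kubernetes Python client to list them (pod and namespace names taken from the log; assumes a reachable kubeconfig):

```python
from kubernetes import client, config

# List the Secret/ConfigMap objects a pod's volumes reference; these are among
# the objects the kubelet reflector lines above begin watching at admission.
config.load_kube_config()
pod = client.CoreV1Api().read_namespaced_pod("keystone-bootstrap-kh4mg", "openstack")

for v in pod.spec.volumes or []:
    if v.secret:
        print(f"{v.name}: Secret/{v.secret.secret_name}")
    elif v.config_map:
        print(f"{v.name}: ConfigMap/{v.config_map.name}")
    elif v.projected:
        print(f"{v.name}: projected ({len(v.projected.sources or [])} sources)")
```

For this pod that should list the scripts, fernet-keys, credential-keys, config-data and combined-ca-bundle Secret volumes plus the kube-api-access-bklp5 projected token seen in the VerifyControllerAttachedVolume entries that follow.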
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.520972 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kh4mg"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.543821 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nn89w"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.607933 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-config\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608014 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608040 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bklp5\" (UniqueName: \"kubernetes.io/projected/c66846c7-2338-4a6e-afe3-02722622b967-kube-api-access-bklp5\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608100 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-scripts\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608333 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-fernet-keys\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608415 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-credential-keys\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608431 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-config-data\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608450 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-combined-ca-bundle\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " 
pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608587 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-svc\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608726 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608774 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.608854 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t46jm\" (UniqueName: \"kubernetes.io/projected/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-kube-api-access-t46jm\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.639899 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-f2xjj"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.641536 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.645000 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.645296 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7b762" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.645461 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.659248 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-f2xjj"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.711880 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-scripts\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.711963 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-fernet-keys\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712015 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-credential-keys\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712041 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-config-data\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712069 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-combined-ca-bundle\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712120 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-svc\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712179 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712205 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712249 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t46jm\" (UniqueName: \"kubernetes.io/projected/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-kube-api-access-t46jm\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712334 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-config\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712391 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.712418 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bklp5\" (UniqueName: \"kubernetes.io/projected/c66846c7-2338-4a6e-afe3-02722622b967-kube-api-access-bklp5\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.714914 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-svc\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.718442 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.719136 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.726076 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-config\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.726508 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-fernet-keys\") pod \"keystone-bootstrap-kh4mg\" (UID: 
\"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.727512 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.739597 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-scripts\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.741259 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-credential-keys\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.762497 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9mv8x"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.763495 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-combined-ca-bundle\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.764498 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.770989 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-zl7sx"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.772591 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.774471 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-config-data\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.774581 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-c7c6w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.775078 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.776063 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.777036 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rdw8h" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.777244 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.779187 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bklp5\" (UniqueName: \"kubernetes.io/projected/c66846c7-2338-4a6e-afe3-02722622b967-kube-api-access-bklp5\") pod \"keystone-bootstrap-kh4mg\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.781253 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t46jm\" (UniqueName: \"kubernetes.io/projected/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-kube-api-access-t46jm\") pod \"dnsmasq-dns-5b868669f-nn89w\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.798842 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zl7sx"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.813818 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-486tv\" (UniqueName: \"kubernetes.io/projected/eedde5de-ead1-462b-a55f-3473c0f09f43-kube-api-access-486tv\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.813920 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-db-sync-config-data\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.813981 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eedde5de-ead1-462b-a55f-3473c0f09f43-etc-machine-id\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.814087 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-config-data\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.814120 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-combined-ca-bundle\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.814151 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-scripts\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.825717 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9mv8x"] Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.914613 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.915214 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.916325 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-config\") pod \"neutron-db-sync-zl7sx\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.916473 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drqqg\" (UniqueName: \"kubernetes.io/projected/df0a3540-9534-46cf-8ecd-c32878e75b08-kube-api-access-drqqg\") pod \"neutron-db-sync-zl7sx\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.916553 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-config-data\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.916621 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-combined-ca-bundle\") pod \"barbican-db-sync-9mv8x\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.916681 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-db-sync-config-data\") pod \"barbican-db-sync-9mv8x\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.916747 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-combined-ca-bundle\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.916852 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-scripts\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.916923 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-combined-ca-bundle\") pod \"neutron-db-sync-zl7sx\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.917020 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjkzd\" (UniqueName: \"kubernetes.io/projected/f03a0db7-fab9-4d77-8f2e-368c122983ca-kube-api-access-zjkzd\") pod \"barbican-db-sync-9mv8x\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.917113 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-486tv\" (UniqueName: \"kubernetes.io/projected/eedde5de-ead1-462b-a55f-3473c0f09f43-kube-api-access-486tv\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.917264 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-db-sync-config-data\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.917374 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eedde5de-ead1-462b-a55f-3473c0f09f43-etc-machine-id\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.917499 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eedde5de-ead1-462b-a55f-3473c0f09f43-etc-machine-id\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.936116 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-scripts\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.936799 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-config-data\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.937690 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-combined-ca-bundle\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.943299 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-db-sync-config-data\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.965588 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-486tv\" (UniqueName: \"kubernetes.io/projected/eedde5de-ead1-462b-a55f-3473c0f09f43-kube-api-access-486tv\") pod \"cinder-db-sync-f2xjj\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:38 crc kubenswrapper[4710]: I1128 17:18:38.976063 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.019670 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-config\") pod \"neutron-db-sync-zl7sx\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.019790 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drqqg\" (UniqueName: \"kubernetes.io/projected/df0a3540-9534-46cf-8ecd-c32878e75b08-kube-api-access-drqqg\") pod \"neutron-db-sync-zl7sx\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.019832 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-combined-ca-bundle\") pod \"barbican-db-sync-9mv8x\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.019851 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-db-sync-config-data\") pod \"barbican-db-sync-9mv8x\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.019883 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-combined-ca-bundle\") pod \"neutron-db-sync-zl7sx\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.019934 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjkzd\" 
(UniqueName: \"kubernetes.io/projected/f03a0db7-fab9-4d77-8f2e-368c122983ca-kube-api-access-zjkzd\") pod \"barbican-db-sync-9mv8x\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.029774 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-db-sync-config-data\") pod \"barbican-db-sync-9mv8x\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.047666 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-combined-ca-bundle\") pod \"neutron-db-sync-zl7sx\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.050377 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-combined-ca-bundle\") pod \"barbican-db-sync-9mv8x\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.050718 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-config\") pod \"neutron-db-sync-zl7sx\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.053892 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.084607 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjkzd\" (UniqueName: \"kubernetes.io/projected/f03a0db7-fab9-4d77-8f2e-368c122983ca-kube-api-access-zjkzd\") pod \"barbican-db-sync-9mv8x\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.108165 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drqqg\" (UniqueName: \"kubernetes.io/projected/df0a3540-9534-46cf-8ecd-c32878e75b08-kube-api-access-drqqg\") pod \"neutron-db-sync-zl7sx\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.109149 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.114365 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.114652 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.207595 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nn89w"] Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.207638 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.220202 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-n5chx"] Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.222340 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.230929 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.231199 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.231406 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rtg87" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.231774 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n5chx"] Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.242255 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xw8td" event={"ID":"a3835d37-f072-4310-a667-a7f398e80ab1","Type":"ContainerStarted","Data":"ee00b88d2fd20227ce434deceb3a2801039dbc78f7fa0413ec0b7e6dc9387ecb"} Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.242305 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-w2dhd"] Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.249815 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-w2dhd"] Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.250061 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.265698 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.265748 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-log-httpd\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.265788 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.265830 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-scripts\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.265937 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-run-httpd\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.265990 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-config-data\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.266045 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmnhd\" (UniqueName: \"kubernetes.io/projected/946b6bdb-75de-4047-a448-fb453e602b7f-kube-api-access-tmnhd\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.288034 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xw8td" podStartSLOduration=3.918431198 podStartE2EDuration="34.288013275s" podCreationTimestamp="2025-11-28 17:18:05 +0000 UTC" firstStartedPulling="2025-11-28 17:18:07.438789694 +0000 UTC m=+1176.697089739" lastFinishedPulling="2025-11-28 17:18:37.808371771 +0000 UTC m=+1207.066671816" observedRunningTime="2025-11-28 17:18:39.274031676 +0000 UTC m=+1208.532331721" watchObservedRunningTime="2025-11-28 17:18:39.288013275 +0000 UTC m=+1208.546313320" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.352352 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375549 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-scripts\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375625 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375666 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhrm\" (UniqueName: \"kubernetes.io/projected/cab500d6-0a90-45c1-b760-53db118834a3-kube-api-access-gdhrm\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375717 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-config-data\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375748 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-config\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375855 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375886 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmnhd\" (UniqueName: \"kubernetes.io/projected/946b6bdb-75de-4047-a448-fb453e602b7f-kube-api-access-tmnhd\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375943 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbnhf\" (UniqueName: \"kubernetes.io/projected/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-kube-api-access-gbnhf\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375970 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-svc\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") 
" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.375999 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.376019 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-combined-ca-bundle\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.376040 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-log-httpd\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.376081 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.376139 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-config-data\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.376171 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-scripts\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.376193 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.376294 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab500d6-0a90-45c1-b760-53db118834a3-logs\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.376356 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-run-httpd\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.376859 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-run-httpd\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.377916 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-log-httpd\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.383543 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-config-data\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.385388 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.388657 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.391034 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-scripts\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.394904 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmnhd\" (UniqueName: \"kubernetes.io/projected/946b6bdb-75de-4047-a448-fb453e602b7f-kube-api-access-tmnhd\") pod \"ceilometer-0\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.402901 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.467238 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.479583 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-combined-ca-bundle\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.479668 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-config-data\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.479705 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.479778 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab500d6-0a90-45c1-b760-53db118834a3-logs\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.479830 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-scripts\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.479861 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.479886 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdhrm\" (UniqueName: \"kubernetes.io/projected/cab500d6-0a90-45c1-b760-53db118834a3-kube-api-access-gdhrm\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.479932 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-config\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.479984 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.480032 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbnhf\" (UniqueName: \"kubernetes.io/projected/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-kube-api-access-gbnhf\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.480059 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-svc\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.480964 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-svc\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.483081 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.483121 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-config\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.484036 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.484121 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab500d6-0a90-45c1-b760-53db118834a3-logs\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.484587 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.540697 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbnhf\" (UniqueName: \"kubernetes.io/projected/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-kube-api-access-gbnhf\") pod \"dnsmasq-dns-cf78879c9-w2dhd\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") " pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.579277 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.608285 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdhrm\" (UniqueName: \"kubernetes.io/projected/cab500d6-0a90-45c1-b760-53db118834a3-kube-api-access-gdhrm\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.608536 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-scripts\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.617408 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-config-data\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.618332 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-combined-ca-bundle\") pod \"placement-db-sync-n5chx\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.653493 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nn89w"] Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.671143 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kh4mg"] Nov 28 17:18:39 crc kubenswrapper[4710]: I1128 17:18:39.869819 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-n5chx" Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.089516 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-f2xjj"] Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.266621 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9mv8x"] Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.270400 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f2xjj" event={"ID":"eedde5de-ead1-462b-a55f-3473c0f09f43","Type":"ContainerStarted","Data":"a936ac67d0bf036a9717cecd0a769101105ea7ce3fb97995f80338706ea50126"} Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.273018 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kh4mg" event={"ID":"c66846c7-2338-4a6e-afe3-02722622b967","Type":"ContainerStarted","Data":"1e104a94e96fd6e373c1e5a2cf49ff3a0548c868aee72bf03019b9e0ee881603"} Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.273070 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kh4mg" event={"ID":"c66846c7-2338-4a6e-afe3-02722622b967","Type":"ContainerStarted","Data":"15e1499cddf4cb71ba91b0193143dc8ae5bbba291e88bcaac26b5dc3ccc0f71d"} Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.274793 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b868669f-nn89w" podUID="e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" containerName="init" containerID="cri-o://a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f" gracePeriod=10 Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.274910 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" podUID="326b2f12-36ee-4772-820e-4f03c5919bd0" containerName="dnsmasq-dns" containerID="cri-o://5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce" gracePeriod=10 Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.275012 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-nn89w" event={"ID":"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1","Type":"ContainerStarted","Data":"8960d6673e67dee02e2b9893c9ac2c13389a5c270442a16753a24d25a9b33432"} Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.311588 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kh4mg" podStartSLOduration=2.311561557 podStartE2EDuration="2.311561557s" podCreationTimestamp="2025-11-28 17:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:18:40.302817373 +0000 UTC m=+1209.561117408" watchObservedRunningTime="2025-11-28 17:18:40.311561557 +0000 UTC m=+1209.569861612" Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.729275 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-w2dhd"] Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.737157 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.756471 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-zl7sx"] Nov 28 17:18:40 crc kubenswrapper[4710]: I1128 17:18:40.777928 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n5chx"] Nov 28 17:18:41 crc 
kubenswrapper[4710]: I1128 17:18:41.033390 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.046907 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.130609 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-sb\") pod \"326b2f12-36ee-4772-820e-4f03c5919bd0\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.131216 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-config\") pod \"326b2f12-36ee-4772-820e-4f03c5919bd0\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.131256 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xll6b\" (UniqueName: \"kubernetes.io/projected/326b2f12-36ee-4772-820e-4f03c5919bd0-kube-api-access-xll6b\") pod \"326b2f12-36ee-4772-820e-4f03c5919bd0\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.131288 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-svc\") pod \"326b2f12-36ee-4772-820e-4f03c5919bd0\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.131404 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-nb\") pod \"326b2f12-36ee-4772-820e-4f03c5919bd0\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.131428 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-swift-storage-0\") pod \"326b2f12-36ee-4772-820e-4f03c5919bd0\" (UID: \"326b2f12-36ee-4772-820e-4f03c5919bd0\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.185018 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/326b2f12-36ee-4772-820e-4f03c5919bd0-kube-api-access-xll6b" (OuterVolumeSpecName: "kube-api-access-xll6b") pod "326b2f12-36ee-4772-820e-4f03c5919bd0" (UID: "326b2f12-36ee-4772-820e-4f03c5919bd0"). InnerVolumeSpecName "kube-api-access-xll6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.203988 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "326b2f12-36ee-4772-820e-4f03c5919bd0" (UID: "326b2f12-36ee-4772-820e-4f03c5919bd0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.228529 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-config" (OuterVolumeSpecName: "config") pod "326b2f12-36ee-4772-820e-4f03c5919bd0" (UID: "326b2f12-36ee-4772-820e-4f03c5919bd0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.231119 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "326b2f12-36ee-4772-820e-4f03c5919bd0" (UID: "326b2f12-36ee-4772-820e-4f03c5919bd0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.235848 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t46jm\" (UniqueName: \"kubernetes.io/projected/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-kube-api-access-t46jm\") pod \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.235912 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-svc\") pod \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.236633 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-swift-storage-0\") pod \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.236710 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-nb\") pod \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.236811 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-config\") pod \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.236859 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-sb\") pod \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\" (UID: \"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1\") " Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.237606 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.237630 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-config\") on node \"crc\" DevicePath \"\"" 
Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.237640 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xll6b\" (UniqueName: \"kubernetes.io/projected/326b2f12-36ee-4772-820e-4f03c5919bd0-kube-api-access-xll6b\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.237650 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.239727 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-kube-api-access-t46jm" (OuterVolumeSpecName: "kube-api-access-t46jm") pod "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" (UID: "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1"). InnerVolumeSpecName "kube-api-access-t46jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.273303 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "326b2f12-36ee-4772-820e-4f03c5919bd0" (UID: "326b2f12-36ee-4772-820e-4f03c5919bd0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.279369 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" (UID: "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.289506 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" (UID: "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.296767 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9mv8x" event={"ID":"f03a0db7-fab9-4d77-8f2e-368c122983ca","Type":"ContainerStarted","Data":"d319cc4472a517e8625772417e26aea344031ecda3aeb75d0b334d9bb4098414"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.298830 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n5chx" event={"ID":"cab500d6-0a90-45c1-b760-53db118834a3","Type":"ContainerStarted","Data":"136b163ad420566f7112250ea018392fd3e1a9c2eeefa16da4595e9177947635"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.301105 4710 generic.go:334] "Generic (PLEG): container finished" podID="326b2f12-36ee-4772-820e-4f03c5919bd0" containerID="5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce" exitCode=0 Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.301167 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" event={"ID":"326b2f12-36ee-4772-820e-4f03c5919bd0","Type":"ContainerDied","Data":"5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.301193 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" event={"ID":"326b2f12-36ee-4772-820e-4f03c5919bd0","Type":"ContainerDied","Data":"ebc370a3eb04c40e7cd82803c741e3e7e22204b79411b2c43022f9a57cbded1b"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.301217 4710 scope.go:117] "RemoveContainer" containerID="5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.301346 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-bhnkl" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.304467 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" (UID: "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.304846 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "326b2f12-36ee-4772-820e-4f03c5919bd0" (UID: "326b2f12-36ee-4772-820e-4f03c5919bd0"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.310092 4710 generic.go:334] "Generic (PLEG): container finished" podID="e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" containerID="a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f" exitCode=0 Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.310143 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-nn89w" event={"ID":"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1","Type":"ContainerDied","Data":"a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.310164 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-nn89w" event={"ID":"e957ae99-9cfc-4310-ba0b-3ff300cbf1b1","Type":"ContainerDied","Data":"8960d6673e67dee02e2b9893c9ac2c13389a5c270442a16753a24d25a9b33432"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.310217 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-nn89w" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.311073 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-config" (OuterVolumeSpecName: "config") pod "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" (UID: "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.324025 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerStarted","Data":"735a302e446a7c8c4bdd569941fb04ba088b11abfeee7c1fffd75acb5fadf71c"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.326479 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" (UID: "e957ae99-9cfc-4310-ba0b-3ff300cbf1b1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.331518 4710 generic.go:334] "Generic (PLEG): container finished" podID="1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" containerID="b131a1072266e75edd164fc161a13921ed902b113c8b980e40744f6c93d389d8" exitCode=0 Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.331593 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" event={"ID":"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a","Type":"ContainerDied","Data":"b131a1072266e75edd164fc161a13921ed902b113c8b980e40744f6c93d389d8"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.331617 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" event={"ID":"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a","Type":"ContainerStarted","Data":"4cc7350e48608ceb5fb99ca94e32fda84d43348dabd2a3529a15584edeb04871"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.348077 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t46jm\" (UniqueName: \"kubernetes.io/projected/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-kube-api-access-t46jm\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.348130 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.348141 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.348150 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.348159 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.348169 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.348273 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.348283 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/326b2f12-36ee-4772-820e-4f03c5919bd0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.369884 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zl7sx" event={"ID":"df0a3540-9534-46cf-8ecd-c32878e75b08","Type":"ContainerStarted","Data":"f29664a6a5bf62a66f20f2c248f0af3bf4caaba8bf83feafdbfd1f78f62e8fb0"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.369954 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zl7sx" 
event={"ID":"df0a3540-9534-46cf-8ecd-c32878e75b08","Type":"ContainerStarted","Data":"e7a442ca39b0a09655c3268cf3999406a4ab4f32a64180ef3a59af32abb670b3"} Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.386575 4710 scope.go:117] "RemoveContainer" containerID="0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.449314 4710 scope.go:117] "RemoveContainer" containerID="5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.449350 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-zl7sx" podStartSLOduration=3.449333956 podStartE2EDuration="3.449333956s" podCreationTimestamp="2025-11-28 17:18:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:18:41.386628707 +0000 UTC m=+1210.644928762" watchObservedRunningTime="2025-11-28 17:18:41.449333956 +0000 UTC m=+1210.707634001" Nov 28 17:18:41 crc kubenswrapper[4710]: E1128 17:18:41.457618 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce\": container with ID starting with 5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce not found: ID does not exist" containerID="5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.457676 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce"} err="failed to get container status \"5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce\": rpc error: code = NotFound desc = could not find container \"5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce\": container with ID starting with 5fcd7737e2b53c17b7662f10baba6005ec040f0e97ae250f59344afbc74f5fce not found: ID does not exist" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.457703 4710 scope.go:117] "RemoveContainer" containerID="0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820" Nov 28 17:18:41 crc kubenswrapper[4710]: E1128 17:18:41.461331 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820\": container with ID starting with 0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820 not found: ID does not exist" containerID="0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.461483 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820"} err="failed to get container status \"0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820\": rpc error: code = NotFound desc = could not find container \"0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820\": container with ID starting with 0a9e69227dfbc288b37151ceb6b3bbf98725528adbbebc7e5782ab64e3227820 not found: ID does not exist" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.462025 4710 scope.go:117] "RemoveContainer" containerID="a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f" Nov 28 
17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.523798 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.607667 4710 scope.go:117] "RemoveContainer" containerID="a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f" Nov 28 17:18:41 crc kubenswrapper[4710]: E1128 17:18:41.608370 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f\": container with ID starting with a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f not found: ID does not exist" containerID="a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.608406 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f"} err="failed to get container status \"a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f\": rpc error: code = NotFound desc = could not find container \"a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f\": container with ID starting with a6bf515c296c299077f871980dcf6baece59f8182c5a6ba6145f4043d073943f not found: ID does not exist" Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.654968 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-bhnkl"] Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.659555 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-bhnkl"] Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.743066 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nn89w"] Nov 28 17:18:41 crc kubenswrapper[4710]: I1128 17:18:41.774142 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-nn89w"] Nov 28 17:18:42 crc kubenswrapper[4710]: I1128 17:18:42.400928 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" event={"ID":"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a","Type":"ContainerStarted","Data":"0c3e977d07f16304038ca3631b473fd5771aab94460313f05e917c49bfdc1c79"} Nov 28 17:18:42 crc kubenswrapper[4710]: I1128 17:18:42.401236 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:42 crc kubenswrapper[4710]: I1128 17:18:42.431820 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" podStartSLOduration=3.431800967 podStartE2EDuration="3.431800967s" podCreationTimestamp="2025-11-28 17:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:18:42.420614946 +0000 UTC m=+1211.678915001" watchObservedRunningTime="2025-11-28 17:18:42.431800967 +0000 UTC m=+1211.690101012" Nov 28 17:18:43 crc kubenswrapper[4710]: I1128 17:18:43.154951 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="326b2f12-36ee-4772-820e-4f03c5919bd0" path="/var/lib/kubelet/pods/326b2f12-36ee-4772-820e-4f03c5919bd0/volumes" Nov 28 17:18:43 crc kubenswrapper[4710]: I1128 17:18:43.155716 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" 
path="/var/lib/kubelet/pods/e957ae99-9cfc-4310-ba0b-3ff300cbf1b1/volumes" Nov 28 17:18:45 crc kubenswrapper[4710]: I1128 17:18:45.434961 4710 generic.go:334] "Generic (PLEG): container finished" podID="c66846c7-2338-4a6e-afe3-02722622b967" containerID="1e104a94e96fd6e373c1e5a2cf49ff3a0548c868aee72bf03019b9e0ee881603" exitCode=0 Nov 28 17:18:45 crc kubenswrapper[4710]: I1128 17:18:45.435037 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kh4mg" event={"ID":"c66846c7-2338-4a6e-afe3-02722622b967","Type":"ContainerDied","Data":"1e104a94e96fd6e373c1e5a2cf49ff3a0548c868aee72bf03019b9e0ee881603"} Nov 28 17:18:49 crc kubenswrapper[4710]: I1128 17:18:49.580896 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:18:49 crc kubenswrapper[4710]: I1128 17:18:49.657037 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fkpk9"] Nov 28 17:18:49 crc kubenswrapper[4710]: I1128 17:18:49.657613 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" containerName="dnsmasq-dns" containerID="cri-o://7d9ecdaf3372577fdecf4e222b5356fdf79070cdb0a3eae03e648bd79e503c11" gracePeriod=10 Nov 28 17:18:50 crc kubenswrapper[4710]: I1128 17:18:50.503087 4710 generic.go:334] "Generic (PLEG): container finished" podID="0242e508-bdc7-4611-92f2-6df38d51821c" containerID="7d9ecdaf3372577fdecf4e222b5356fdf79070cdb0a3eae03e648bd79e503c11" exitCode=0 Nov 28 17:18:50 crc kubenswrapper[4710]: I1128 17:18:50.503141 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" event={"ID":"0242e508-bdc7-4611-92f2-6df38d51821c","Type":"ContainerDied","Data":"7d9ecdaf3372577fdecf4e222b5356fdf79070cdb0a3eae03e648bd79e503c11"} Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.103130 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.295969 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-credential-keys\") pod \"c66846c7-2338-4a6e-afe3-02722622b967\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.296036 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-scripts\") pod \"c66846c7-2338-4a6e-afe3-02722622b967\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.296148 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bklp5\" (UniqueName: \"kubernetes.io/projected/c66846c7-2338-4a6e-afe3-02722622b967-kube-api-access-bklp5\") pod \"c66846c7-2338-4a6e-afe3-02722622b967\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.296176 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-combined-ca-bundle\") pod \"c66846c7-2338-4a6e-afe3-02722622b967\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.296266 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-fernet-keys\") pod \"c66846c7-2338-4a6e-afe3-02722622b967\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.296285 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-config-data\") pod \"c66846c7-2338-4a6e-afe3-02722622b967\" (UID: \"c66846c7-2338-4a6e-afe3-02722622b967\") " Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.302619 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c66846c7-2338-4a6e-afe3-02722622b967" (UID: "c66846c7-2338-4a6e-afe3-02722622b967"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.303926 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-scripts" (OuterVolumeSpecName: "scripts") pod "c66846c7-2338-4a6e-afe3-02722622b967" (UID: "c66846c7-2338-4a6e-afe3-02722622b967"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.310302 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c66846c7-2338-4a6e-afe3-02722622b967" (UID: "c66846c7-2338-4a6e-afe3-02722622b967"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.310386 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c66846c7-2338-4a6e-afe3-02722622b967-kube-api-access-bklp5" (OuterVolumeSpecName: "kube-api-access-bklp5") pod "c66846c7-2338-4a6e-afe3-02722622b967" (UID: "c66846c7-2338-4a6e-afe3-02722622b967"). InnerVolumeSpecName "kube-api-access-bklp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.339987 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c66846c7-2338-4a6e-afe3-02722622b967" (UID: "c66846c7-2338-4a6e-afe3-02722622b967"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.360083 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-config-data" (OuterVolumeSpecName: "config-data") pod "c66846c7-2338-4a6e-afe3-02722622b967" (UID: "c66846c7-2338-4a6e-afe3-02722622b967"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.398480 4710 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.398610 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.398671 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bklp5\" (UniqueName: \"kubernetes.io/projected/c66846c7-2338-4a6e-afe3-02722622b967-kube-api-access-bklp5\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.398730 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.398822 4710 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.398878 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c66846c7-2338-4a6e-afe3-02722622b967-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.513492 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kh4mg" event={"ID":"c66846c7-2338-4a6e-afe3-02722622b967","Type":"ContainerDied","Data":"15e1499cddf4cb71ba91b0193143dc8ae5bbba291e88bcaac26b5dc3ccc0f71d"} Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 17:18:51.513538 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e1499cddf4cb71ba91b0193143dc8ae5bbba291e88bcaac26b5dc3ccc0f71d" Nov 28 17:18:51 crc kubenswrapper[4710]: I1128 
17:18:51.513539 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kh4mg" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.220295 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kh4mg"] Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.232967 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kh4mg"] Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.235536 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.333965 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mtmgk"] Nov 28 17:18:52 crc kubenswrapper[4710]: E1128 17:18:52.334877 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326b2f12-36ee-4772-820e-4f03c5919bd0" containerName="init" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.334987 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="326b2f12-36ee-4772-820e-4f03c5919bd0" containerName="init" Nov 28 17:18:52 crc kubenswrapper[4710]: E1128 17:18:52.335066 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="326b2f12-36ee-4772-820e-4f03c5919bd0" containerName="dnsmasq-dns" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.335124 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="326b2f12-36ee-4772-820e-4f03c5919bd0" containerName="dnsmasq-dns" Nov 28 17:18:52 crc kubenswrapper[4710]: E1128 17:18:52.335228 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c66846c7-2338-4a6e-afe3-02722622b967" containerName="keystone-bootstrap" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.335287 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="c66846c7-2338-4a6e-afe3-02722622b967" containerName="keystone-bootstrap" Nov 28 17:18:52 crc kubenswrapper[4710]: E1128 17:18:52.335350 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" containerName="init" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.335400 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" containerName="init" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.335866 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="c66846c7-2338-4a6e-afe3-02722622b967" containerName="keystone-bootstrap" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.335952 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="326b2f12-36ee-4772-820e-4f03c5919bd0" containerName="dnsmasq-dns" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.336012 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="e957ae99-9cfc-4310-ba0b-3ff300cbf1b1" containerName="init" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.340677 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.345898 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.346193 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.346303 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.346404 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xmd8n" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.346511 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.350261 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mtmgk"] Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.524153 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-credential-keys\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.524246 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-combined-ca-bundle\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.524282 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-config-data\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.524441 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz8rr\" (UniqueName: \"kubernetes.io/projected/0502e48f-0338-42fa-9403-e87c11997261-kube-api-access-mz8rr\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.524516 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-fernet-keys\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.524569 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-scripts\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.525598 4710 generic.go:334] "Generic (PLEG): container finished" 
podID="a3835d37-f072-4310-a667-a7f398e80ab1" containerID="ee00b88d2fd20227ce434deceb3a2801039dbc78f7fa0413ec0b7e6dc9387ecb" exitCode=0 Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.525637 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xw8td" event={"ID":"a3835d37-f072-4310-a667-a7f398e80ab1","Type":"ContainerDied","Data":"ee00b88d2fd20227ce434deceb3a2801039dbc78f7fa0413ec0b7e6dc9387ecb"} Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.631107 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-credential-keys\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.631224 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-combined-ca-bundle\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.631255 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-config-data\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.631346 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz8rr\" (UniqueName: \"kubernetes.io/projected/0502e48f-0338-42fa-9403-e87c11997261-kube-api-access-mz8rr\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.631405 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-fernet-keys\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.631460 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-scripts\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.636300 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-combined-ca-bundle\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.636346 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-credential-keys\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.636955 4710 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-scripts\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.637525 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-config-data\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.645859 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-fernet-keys\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.648595 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz8rr\" (UniqueName: \"kubernetes.io/projected/0502e48f-0338-42fa-9403-e87c11997261-kube-api-access-mz8rr\") pod \"keystone-bootstrap-mtmgk\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:52 crc kubenswrapper[4710]: I1128 17:18:52.665661 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:18:53 crc kubenswrapper[4710]: I1128 17:18:53.169685 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c66846c7-2338-4a6e-afe3-02722622b967" path="/var/lib/kubelet/pods/c66846c7-2338-4a6e-afe3-02722622b967/volumes" Nov 28 17:18:57 crc kubenswrapper[4710]: I1128 17:18:57.235471 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Nov 28 17:19:02 crc kubenswrapper[4710]: I1128 17:19:02.235487 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Nov 28 17:19:02 crc kubenswrapper[4710]: I1128 17:19:02.235867 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" Nov 28 17:19:02 crc kubenswrapper[4710]: E1128 17:19:02.606256 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 28 17:19:02 crc kubenswrapper[4710]: E1128 17:19:02.606425 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjkzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-9mv8x_openstack(f03a0db7-fab9-4d77-8f2e-368c122983ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:02 crc kubenswrapper[4710]: E1128 17:19:02.608200 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-9mv8x" podUID="f03a0db7-fab9-4d77-8f2e-368c122983ca" Nov 28 17:19:02 crc kubenswrapper[4710]: E1128 17:19:02.641674 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-9mv8x" podUID="f03a0db7-fab9-4d77-8f2e-368c122983ca" Nov 28 17:19:03 crc kubenswrapper[4710]: I1128 17:19:03.659835 4710 generic.go:334] "Generic (PLEG): container finished" podID="df0a3540-9534-46cf-8ecd-c32878e75b08" containerID="f29664a6a5bf62a66f20f2c248f0af3bf4caaba8bf83feafdbfd1f78f62e8fb0" exitCode=0 Nov 28 17:19:03 crc kubenswrapper[4710]: I1128 17:19:03.659930 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zl7sx" event={"ID":"df0a3540-9534-46cf-8ecd-c32878e75b08","Type":"ContainerDied","Data":"f29664a6a5bf62a66f20f2c248f0af3bf4caaba8bf83feafdbfd1f78f62e8fb0"} Nov 28 17:19:03 crc kubenswrapper[4710]: E1128 17:19:03.863470 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 28 17:19:03 crc kubenswrapper[4710]: E1128 17:19:03.863682 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-486tv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-f2xjj_openstack(eedde5de-ead1-462b-a55f-3473c0f09f43): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:19:03 crc kubenswrapper[4710]: E1128 17:19:03.865329 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-f2xjj" podUID="eedde5de-ead1-462b-a55f-3473c0f09f43" Nov 28 17:19:03 crc kubenswrapper[4710]: I1128 17:19:03.911677 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xw8td" Nov 28 17:19:03 crc kubenswrapper[4710]: I1128 17:19:03.966581 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-db-sync-config-data\") pod \"a3835d37-f072-4310-a667-a7f398e80ab1\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " Nov 28 17:19:03 crc kubenswrapper[4710]: I1128 17:19:03.966969 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59bts\" (UniqueName: \"kubernetes.io/projected/a3835d37-f072-4310-a667-a7f398e80ab1-kube-api-access-59bts\") pod \"a3835d37-f072-4310-a667-a7f398e80ab1\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " Nov 28 17:19:03 crc kubenswrapper[4710]: I1128 17:19:03.967003 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-config-data\") pod \"a3835d37-f072-4310-a667-a7f398e80ab1\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " Nov 28 17:19:03 crc kubenswrapper[4710]: I1128 17:19:03.967133 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-combined-ca-bundle\") pod \"a3835d37-f072-4310-a667-a7f398e80ab1\" (UID: \"a3835d37-f072-4310-a667-a7f398e80ab1\") " Nov 28 17:19:03 crc kubenswrapper[4710]: I1128 17:19:03.985880 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a3835d37-f072-4310-a667-a7f398e80ab1" (UID: "a3835d37-f072-4310-a667-a7f398e80ab1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:03 crc kubenswrapper[4710]: I1128 17:19:03.990949 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3835d37-f072-4310-a667-a7f398e80ab1-kube-api-access-59bts" (OuterVolumeSpecName: "kube-api-access-59bts") pod "a3835d37-f072-4310-a667-a7f398e80ab1" (UID: "a3835d37-f072-4310-a667-a7f398e80ab1"). InnerVolumeSpecName "kube-api-access-59bts". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.044154 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3835d37-f072-4310-a667-a7f398e80ab1" (UID: "a3835d37-f072-4310-a667-a7f398e80ab1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.067974 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-config-data" (OuterVolumeSpecName: "config-data") pod "a3835d37-f072-4310-a667-a7f398e80ab1" (UID: "a3835d37-f072-4310-a667-a7f398e80ab1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.068508 4710 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.068541 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59bts\" (UniqueName: \"kubernetes.io/projected/a3835d37-f072-4310-a667-a7f398e80ab1-kube-api-access-59bts\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.068551 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.068559 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3835d37-f072-4310-a667-a7f398e80ab1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.114110 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.273285 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-dns-svc\") pod \"0242e508-bdc7-4611-92f2-6df38d51821c\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.273682 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-sb\") pod \"0242e508-bdc7-4611-92f2-6df38d51821c\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.273876 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dq4nn\" (UniqueName: \"kubernetes.io/projected/0242e508-bdc7-4611-92f2-6df38d51821c-kube-api-access-dq4nn\") pod \"0242e508-bdc7-4611-92f2-6df38d51821c\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.273909 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-nb\") pod \"0242e508-bdc7-4611-92f2-6df38d51821c\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.273925 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-config\") pod \"0242e508-bdc7-4611-92f2-6df38d51821c\" (UID: \"0242e508-bdc7-4611-92f2-6df38d51821c\") " Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.279626 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0242e508-bdc7-4611-92f2-6df38d51821c-kube-api-access-dq4nn" (OuterVolumeSpecName: "kube-api-access-dq4nn") pod "0242e508-bdc7-4611-92f2-6df38d51821c" (UID: "0242e508-bdc7-4611-92f2-6df38d51821c"). InnerVolumeSpecName "kube-api-access-dq4nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.323371 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0242e508-bdc7-4611-92f2-6df38d51821c" (UID: "0242e508-bdc7-4611-92f2-6df38d51821c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.324374 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0242e508-bdc7-4611-92f2-6df38d51821c" (UID: "0242e508-bdc7-4611-92f2-6df38d51821c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.326153 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0242e508-bdc7-4611-92f2-6df38d51821c" (UID: "0242e508-bdc7-4611-92f2-6df38d51821c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.329622 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mtmgk"] Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.343829 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-config" (OuterVolumeSpecName: "config") pod "0242e508-bdc7-4611-92f2-6df38d51821c" (UID: "0242e508-bdc7-4611-92f2-6df38d51821c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.376563 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.376607 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dq4nn\" (UniqueName: \"kubernetes.io/projected/0242e508-bdc7-4611-92f2-6df38d51821c-kube-api-access-dq4nn\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.376621 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.376634 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.376646 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0242e508-bdc7-4611-92f2-6df38d51821c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.676676 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n5chx" event={"ID":"cab500d6-0a90-45c1-b760-53db118834a3","Type":"ContainerStarted","Data":"10f6124b673a813aceb84e9ef92ced2a7ba126aa788aff51c77d30ac183cac24"} Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.687246 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerStarted","Data":"41fbf3acdb877076b8bbd2b71856051d28dfd2a86b1063d6888f70c29b5b1900"} Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.692286 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" event={"ID":"0242e508-bdc7-4611-92f2-6df38d51821c","Type":"ContainerDied","Data":"c9f0c5c5d5028e7766a57c305a4a838c13d7fa9717336163109a605654cf74fb"} Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.692330 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-fkpk9" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.692337 4710 scope.go:117] "RemoveContainer" containerID="7d9ecdaf3372577fdecf4e222b5356fdf79070cdb0a3eae03e648bd79e503c11" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.700017 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mtmgk" event={"ID":"0502e48f-0338-42fa-9403-e87c11997261","Type":"ContainerStarted","Data":"2a2518cb61eda9edc870303286bc6c255c0b39265f87554c7f3078eb3c5546c3"} Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.700066 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mtmgk" event={"ID":"0502e48f-0338-42fa-9403-e87c11997261","Type":"ContainerStarted","Data":"93bd3eaa5d9c28008520db735242c208de69c1e7f8c7927dc29d57a17d8fbed4"} Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.702483 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xw8td" event={"ID":"a3835d37-f072-4310-a667-a7f398e80ab1","Type":"ContainerDied","Data":"ee851e596912c0e0ce6c40356df667a32a06376b24c10438ef2ac18e415270b4"} Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.702565 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee851e596912c0e0ce6c40356df667a32a06376b24c10438ef2ac18e415270b4" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.702680 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-xw8td" Nov 28 17:19:04 crc kubenswrapper[4710]: E1128 17:19:04.707933 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-f2xjj" podUID="eedde5de-ead1-462b-a55f-3473c0f09f43" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.715916 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-n5chx" podStartSLOduration=3.688201447 podStartE2EDuration="26.715890822s" podCreationTimestamp="2025-11-28 17:18:38 +0000 UTC" firstStartedPulling="2025-11-28 17:18:40.801123311 +0000 UTC m=+1210.059423356" lastFinishedPulling="2025-11-28 17:19:03.828812686 +0000 UTC m=+1233.087112731" observedRunningTime="2025-11-28 17:19:04.692521539 +0000 UTC m=+1233.950821584" watchObservedRunningTime="2025-11-28 17:19:04.715890822 +0000 UTC m=+1233.974190867" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.735622 4710 scope.go:117] "RemoveContainer" containerID="7bb3a1ae4ed009d9bd292647e1e1e68979c272976e21d90e9ed5d2a06b146c09" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.736719 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mtmgk" podStartSLOduration=12.736696796 podStartE2EDuration="12.736696796s" podCreationTimestamp="2025-11-28 17:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:04.720955372 +0000 UTC m=+1233.979255417" watchObservedRunningTime="2025-11-28 17:19:04.736696796 +0000 UTC m=+1233.994996841" Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 17:19:04.786660 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fkpk9"] Nov 28 17:19:04 crc kubenswrapper[4710]: I1128 
17:19:04.808456 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-fkpk9"] Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.109120 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.165817 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" path="/var/lib/kubelet/pods/0242e508-bdc7-4611-92f2-6df38d51821c/volumes" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.295371 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drqqg\" (UniqueName: \"kubernetes.io/projected/df0a3540-9534-46cf-8ecd-c32878e75b08-kube-api-access-drqqg\") pod \"df0a3540-9534-46cf-8ecd-c32878e75b08\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.295573 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-config\") pod \"df0a3540-9534-46cf-8ecd-c32878e75b08\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.295603 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-combined-ca-bundle\") pod \"df0a3540-9534-46cf-8ecd-c32878e75b08\" (UID: \"df0a3540-9534-46cf-8ecd-c32878e75b08\") " Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.303967 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0a3540-9534-46cf-8ecd-c32878e75b08-kube-api-access-drqqg" (OuterVolumeSpecName: "kube-api-access-drqqg") pod "df0a3540-9534-46cf-8ecd-c32878e75b08" (UID: "df0a3540-9534-46cf-8ecd-c32878e75b08"). InnerVolumeSpecName "kube-api-access-drqqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.354927 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df0a3540-9534-46cf-8ecd-c32878e75b08" (UID: "df0a3540-9534-46cf-8ecd-c32878e75b08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.361928 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-config" (OuterVolumeSpecName: "config") pod "df0a3540-9534-46cf-8ecd-c32878e75b08" (UID: "df0a3540-9534-46cf-8ecd-c32878e75b08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.396816 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-42tmg"] Nov 28 17:19:05 crc kubenswrapper[4710]: E1128 17:19:05.397253 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0a3540-9534-46cf-8ecd-c32878e75b08" containerName="neutron-db-sync" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.397265 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0a3540-9534-46cf-8ecd-c32878e75b08" containerName="neutron-db-sync" Nov 28 17:19:05 crc kubenswrapper[4710]: E1128 17:19:05.397283 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" containerName="dnsmasq-dns" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.397289 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" containerName="dnsmasq-dns" Nov 28 17:19:05 crc kubenswrapper[4710]: E1128 17:19:05.397301 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" containerName="init" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.397308 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" containerName="init" Nov 28 17:19:05 crc kubenswrapper[4710]: E1128 17:19:05.397321 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3835d37-f072-4310-a667-a7f398e80ab1" containerName="glance-db-sync" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.397326 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3835d37-f072-4310-a667-a7f398e80ab1" containerName="glance-db-sync" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.397517 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="0242e508-bdc7-4611-92f2-6df38d51821c" containerName="dnsmasq-dns" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.397539 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0a3540-9534-46cf-8ecd-c32878e75b08" containerName="neutron-db-sync" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.397556 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3835d37-f072-4310-a667-a7f398e80ab1" containerName="glance-db-sync" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.398530 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.398919 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drqqg\" (UniqueName: \"kubernetes.io/projected/df0a3540-9534-46cf-8ecd-c32878e75b08-kube-api-access-drqqg\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.398956 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.398973 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df0a3540-9534-46cf-8ecd-c32878e75b08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.439122 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-42tmg"] Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.505365 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.505484 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v9hm\" (UniqueName: \"kubernetes.io/projected/d3dcaf00-e307-4c65-9609-917193115f81-kube-api-access-5v9hm\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.505527 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.505567 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.505657 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-config\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.505685 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.607594 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-config\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.608644 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.608591 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-config\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.609303 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.609445 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.610040 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.611046 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v9hm\" (UniqueName: \"kubernetes.io/projected/d3dcaf00-e307-4c65-9609-917193115f81-kube-api-access-5v9hm\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.611118 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.611161 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.611857 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.612071 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.628283 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v9hm\" (UniqueName: \"kubernetes.io/projected/d3dcaf00-e307-4c65-9609-917193115f81-kube-api-access-5v9hm\") pod \"dnsmasq-dns-56df8fb6b7-42tmg\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.723438 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-zl7sx" event={"ID":"df0a3540-9534-46cf-8ecd-c32878e75b08","Type":"ContainerDied","Data":"e7a442ca39b0a09655c3268cf3999406a4ab4f32a64180ef3a59af32abb670b3"} Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.723484 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7a442ca39b0a09655c3268cf3999406a4ab4f32a64180ef3a59af32abb670b3" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.723528 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-zl7sx" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.744444 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.855064 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-42tmg"] Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.904034 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2w5q5"] Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.909085 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.928590 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.928661 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-svc\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.928774 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.928811 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.928863 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-config\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.928910 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vj8c\" (UniqueName: \"kubernetes.io/projected/94dc04af-7548-418b-ac27-1d7cf67a4501-kube-api-access-5vj8c\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:05 crc kubenswrapper[4710]: I1128 17:19:05.995312 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2w5q5"] Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.030632 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.030686 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-svc\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.030782 4710 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.030807 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.030851 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-config\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.030882 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vj8c\" (UniqueName: \"kubernetes.io/projected/94dc04af-7548-418b-ac27-1d7cf67a4501-kube-api-access-5vj8c\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.032189 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.032908 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-svc\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.033590 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.034507 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-config\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.039344 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.080688 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vj8c\" (UniqueName: 
\"kubernetes.io/projected/94dc04af-7548-418b-ac27-1d7cf67a4501-kube-api-access-5vj8c\") pod \"dnsmasq-dns-6b7b667979-2w5q5\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.213571 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58777d5fd4-xrcjb"] Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.234953 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.237113 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh28s\" (UniqueName: \"kubernetes.io/projected/18baf4b3-8f80-42fa-8291-377b5ae88a92-kube-api-access-mh28s\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.237143 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-ovndb-tls-certs\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.237166 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-config\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.237231 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-combined-ca-bundle\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.237403 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-httpd-config\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.242256 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.242463 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.242625 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-rdw8h" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.242890 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.250382 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.253334 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58777d5fd4-xrcjb"] Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.346888 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh28s\" (UniqueName: \"kubernetes.io/projected/18baf4b3-8f80-42fa-8291-377b5ae88a92-kube-api-access-mh28s\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.347354 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-ovndb-tls-certs\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.347415 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-config\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.347515 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-combined-ca-bundle\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.347594 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.347936 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-httpd-config\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.355702 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-config\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.356885 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-ovndb-tls-certs\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.358073 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-combined-ca-bundle\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.358993 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" 
(UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-httpd-config\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.361449 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.365348 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.365948 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.366128 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-xds75" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.377691 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.403044 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh28s\" (UniqueName: \"kubernetes.io/projected/18baf4b3-8f80-42fa-8291-377b5ae88a92-kube-api-access-mh28s\") pod \"neutron-58777d5fd4-xrcjb\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.531663 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.534922 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.539266 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.557714 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.557771 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.557812 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shm9j\" (UniqueName: \"kubernetes.io/projected/fd1cadac-4227-4d3a-9d90-630dfa496fe6-kube-api-access-shm9j\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.557836 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-logs\") pod \"glance-default-external-api-0\" (UID: 
\"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.557970 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.557995 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.558115 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.559593 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.612237 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660046 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptd6t\" (UniqueName: \"kubernetes.io/projected/7664a4f2-321d-4ec9-a03c-bb337fc93963-kube-api-access-ptd6t\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660094 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660155 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660176 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660198 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660212 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-logs\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660233 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shm9j\" (UniqueName: \"kubernetes.io/projected/fd1cadac-4227-4d3a-9d90-630dfa496fe6-kube-api-access-shm9j\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660250 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-logs\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660267 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660312 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660327 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660385 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660404 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.660442 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.662191 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-logs\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.662421 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.662910 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.670649 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-scripts\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.670955 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.671073 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-config-data\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.682148 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shm9j\" (UniqueName: \"kubernetes.io/projected/fd1cadac-4227-4d3a-9d90-630dfa496fe6-kube-api-access-shm9j\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.757828 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-42tmg"] Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.759001 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.763116 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.763251 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.763273 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-logs\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.763307 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.763504 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.763580 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.763692 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptd6t\" (UniqueName: \"kubernetes.io/projected/7664a4f2-321d-4ec9-a03c-bb337fc93963-kube-api-access-ptd6t\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.767031 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.767593 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.767824 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-logs\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.767964 4710 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.767981 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.777086 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.803168 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerStarted","Data":"b237dbb0b3fabc4a63137362a724cff366dd721fe61273e69c7ef147a8986356"} Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.803488 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptd6t\" (UniqueName: \"kubernetes.io/projected/7664a4f2-321d-4ec9-a03c-bb337fc93963-kube-api-access-ptd6t\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.839928 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.872519 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:06 crc kubenswrapper[4710]: I1128 17:19:06.923728 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2w5q5"] Nov 28 17:19:06 crc kubenswrapper[4710]: W1128 17:19:06.930968 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94dc04af_7548_418b_ac27_1d7cf67a4501.slice/crio-ea59de97fa4d83cd874c8ab0a495c5b108fb81ab22b325fc1d7e06080d084230 WatchSource:0}: Error finding container ea59de97fa4d83cd874c8ab0a495c5b108fb81ab22b325fc1d7e06080d084230: Status 404 returned error can't find the container with id ea59de97fa4d83cd874c8ab0a495c5b108fb81ab22b325fc1d7e06080d084230 Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.015682 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.203632 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58777d5fd4-xrcjb"] Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.593316 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.748538 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:19:07 crc kubenswrapper[4710]: W1128 17:19:07.752742 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd1cadac_4227_4d3a_9d90_630dfa496fe6.slice/crio-171d8d53dce19ef45139aa753816ae9da194f08107e706b70ebe5fd7804b0f06 WatchSource:0}: Error finding container 171d8d53dce19ef45139aa753816ae9da194f08107e706b70ebe5fd7804b0f06: Status 404 returned error can't find the container with id 171d8d53dce19ef45139aa753816ae9da194f08107e706b70ebe5fd7804b0f06 Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.863065 4710 generic.go:334] "Generic (PLEG): container finished" podID="cab500d6-0a90-45c1-b760-53db118834a3" containerID="10f6124b673a813aceb84e9ef92ced2a7ba126aa788aff51c77d30ac183cac24" exitCode=0 Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.863430 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n5chx" event={"ID":"cab500d6-0a90-45c1-b760-53db118834a3","Type":"ContainerDied","Data":"10f6124b673a813aceb84e9ef92ced2a7ba126aa788aff51c77d30ac183cac24"} Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.874552 4710 generic.go:334] "Generic (PLEG): container finished" podID="94dc04af-7548-418b-ac27-1d7cf67a4501" containerID="a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b" exitCode=0 Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.874712 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" event={"ID":"94dc04af-7548-418b-ac27-1d7cf67a4501","Type":"ContainerDied","Data":"a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b"} Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.874833 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" event={"ID":"94dc04af-7548-418b-ac27-1d7cf67a4501","Type":"ContainerStarted","Data":"ea59de97fa4d83cd874c8ab0a495c5b108fb81ab22b325fc1d7e06080d084230"} Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.893685 4710 generic.go:334] "Generic (PLEG): container finished" podID="d3dcaf00-e307-4c65-9609-917193115f81" containerID="70678d1165086e897d81075f1ebba9f81119118810ffe0772c44e742c3817535" exitCode=0 Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.893746 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" event={"ID":"d3dcaf00-e307-4c65-9609-917193115f81","Type":"ContainerDied","Data":"70678d1165086e897d81075f1ebba9f81119118810ffe0772c44e742c3817535"} Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.893784 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" event={"ID":"d3dcaf00-e307-4c65-9609-917193115f81","Type":"ContainerStarted","Data":"9191330a63b05adffd56048cdf6950b98a5fb80b32c4fa26fcd9c61e20ac60da"} Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.918950 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"7664a4f2-321d-4ec9-a03c-bb337fc93963","Type":"ContainerStarted","Data":"cf3122b261c4cd1b5adc46c4874ddd68bc06c7b76ebcaa3fe7905523e1a5463c"} Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.926239 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd1cadac-4227-4d3a-9d90-630dfa496fe6","Type":"ContainerStarted","Data":"171d8d53dce19ef45139aa753816ae9da194f08107e706b70ebe5fd7804b0f06"} Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.951819 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58777d5fd4-xrcjb" event={"ID":"18baf4b3-8f80-42fa-8291-377b5ae88a92","Type":"ContainerStarted","Data":"2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f"} Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.951862 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58777d5fd4-xrcjb" event={"ID":"18baf4b3-8f80-42fa-8291-377b5ae88a92","Type":"ContainerStarted","Data":"466005bb4ac669fe52e9cd930a3cd4c5f5849bd7260166d1c1753867367d0a4e"} Nov 28 17:19:07 crc kubenswrapper[4710]: I1128 17:19:07.952082 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.041798 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58777d5fd4-xrcjb" podStartSLOduration=2.041782303 podStartE2EDuration="2.041782303s" podCreationTimestamp="2025-11-28 17:19:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:07.975080298 +0000 UTC m=+1237.233380353" watchObservedRunningTime="2025-11-28 17:19:08.041782303 +0000 UTC m=+1237.300082348" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.378779 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.528000 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-config\") pod \"d3dcaf00-e307-4c65-9609-917193115f81\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.528236 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-sb\") pod \"d3dcaf00-e307-4c65-9609-917193115f81\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.528401 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-swift-storage-0\") pod \"d3dcaf00-e307-4c65-9609-917193115f81\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.528470 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-svc\") pod \"d3dcaf00-e307-4c65-9609-917193115f81\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.528497 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-nb\") pod \"d3dcaf00-e307-4c65-9609-917193115f81\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.528513 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v9hm\" (UniqueName: \"kubernetes.io/projected/d3dcaf00-e307-4c65-9609-917193115f81-kube-api-access-5v9hm\") pod \"d3dcaf00-e307-4c65-9609-917193115f81\" (UID: \"d3dcaf00-e307-4c65-9609-917193115f81\") " Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.543215 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3dcaf00-e307-4c65-9609-917193115f81-kube-api-access-5v9hm" (OuterVolumeSpecName: "kube-api-access-5v9hm") pod "d3dcaf00-e307-4c65-9609-917193115f81" (UID: "d3dcaf00-e307-4c65-9609-917193115f81"). InnerVolumeSpecName "kube-api-access-5v9hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.569886 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3dcaf00-e307-4c65-9609-917193115f81" (UID: "d3dcaf00-e307-4c65-9609-917193115f81"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.572336 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-config" (OuterVolumeSpecName: "config") pod "d3dcaf00-e307-4c65-9609-917193115f81" (UID: "d3dcaf00-e307-4c65-9609-917193115f81"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.573394 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3dcaf00-e307-4c65-9609-917193115f81" (UID: "d3dcaf00-e307-4c65-9609-917193115f81"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.621377 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d3dcaf00-e307-4c65-9609-917193115f81" (UID: "d3dcaf00-e307-4c65-9609-917193115f81"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.631219 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.631253 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.631266 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.631279 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v9hm\" (UniqueName: \"kubernetes.io/projected/d3dcaf00-e307-4c65-9609-917193115f81-kube-api-access-5v9hm\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.631294 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.640405 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d3dcaf00-e307-4c65-9609-917193115f81" (UID: "d3dcaf00-e307-4c65-9609-917193115f81"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:08 crc kubenswrapper[4710]: I1128 17:19:08.733808 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3dcaf00-e307-4c65-9609-917193115f81-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.002930 4710 generic.go:334] "Generic (PLEG): container finished" podID="0502e48f-0338-42fa-9403-e87c11997261" containerID="2a2518cb61eda9edc870303286bc6c255c0b39265f87554c7f3078eb3c5546c3" exitCode=0 Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.003015 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mtmgk" event={"ID":"0502e48f-0338-42fa-9403-e87c11997261","Type":"ContainerDied","Data":"2a2518cb61eda9edc870303286bc6c255c0b39265f87554c7f3078eb3c5546c3"} Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.009127 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58777d5fd4-xrcjb" event={"ID":"18baf4b3-8f80-42fa-8291-377b5ae88a92","Type":"ContainerStarted","Data":"26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12"} Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.040326 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" event={"ID":"94dc04af-7548-418b-ac27-1d7cf67a4501","Type":"ContainerStarted","Data":"7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134"} Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.040946 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.050513 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" event={"ID":"d3dcaf00-e307-4c65-9609-917193115f81","Type":"ContainerDied","Data":"9191330a63b05adffd56048cdf6950b98a5fb80b32c4fa26fcd9c61e20ac60da"} Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.050573 4710 scope.go:117] "RemoveContainer" containerID="70678d1165086e897d81075f1ebba9f81119118810ffe0772c44e742c3817535" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.050602 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-42tmg" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.053436 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7664a4f2-321d-4ec9-a03c-bb337fc93963","Type":"ContainerStarted","Data":"bcc722bb92d4167d6d73c663f97919e03d361296c444ddab88cad2558e668697"} Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.057646 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd1cadac-4227-4d3a-9d90-630dfa496fe6","Type":"ContainerStarted","Data":"8c3d1c7341d2b32445bf57527cb1a9496a13dc815b67b64f5a5dbc571a3ee417"} Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.106668 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" podStartSLOduration=4.106648272 podStartE2EDuration="4.106648272s" podCreationTimestamp="2025-11-28 17:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:09.092298182 +0000 UTC m=+1238.350598227" watchObservedRunningTime="2025-11-28 17:19:09.106648272 +0000 UTC m=+1238.364948317" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.202156 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-42tmg"] Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.217883 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-42tmg"] Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.576243 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n5chx" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.649082 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab500d6-0a90-45c1-b760-53db118834a3-logs\") pod \"cab500d6-0a90-45c1-b760-53db118834a3\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.649131 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-scripts\") pod \"cab500d6-0a90-45c1-b760-53db118834a3\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.649264 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-config-data\") pod \"cab500d6-0a90-45c1-b760-53db118834a3\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.649306 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdhrm\" (UniqueName: \"kubernetes.io/projected/cab500d6-0a90-45c1-b760-53db118834a3-kube-api-access-gdhrm\") pod \"cab500d6-0a90-45c1-b760-53db118834a3\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.649352 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-combined-ca-bundle\") pod \"cab500d6-0a90-45c1-b760-53db118834a3\" (UID: \"cab500d6-0a90-45c1-b760-53db118834a3\") " Nov 28 17:19:09 crc 
kubenswrapper[4710]: I1128 17:19:09.649698 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cab500d6-0a90-45c1-b760-53db118834a3-logs" (OuterVolumeSpecName: "logs") pod "cab500d6-0a90-45c1-b760-53db118834a3" (UID: "cab500d6-0a90-45c1-b760-53db118834a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.656379 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cab500d6-0a90-45c1-b760-53db118834a3-kube-api-access-gdhrm" (OuterVolumeSpecName: "kube-api-access-gdhrm") pod "cab500d6-0a90-45c1-b760-53db118834a3" (UID: "cab500d6-0a90-45c1-b760-53db118834a3"). InnerVolumeSpecName "kube-api-access-gdhrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.663090 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-scripts" (OuterVolumeSpecName: "scripts") pod "cab500d6-0a90-45c1-b760-53db118834a3" (UID: "cab500d6-0a90-45c1-b760-53db118834a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.698868 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-config-data" (OuterVolumeSpecName: "config-data") pod "cab500d6-0a90-45c1-b760-53db118834a3" (UID: "cab500d6-0a90-45c1-b760-53db118834a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.712665 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cab500d6-0a90-45c1-b760-53db118834a3" (UID: "cab500d6-0a90-45c1-b760-53db118834a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.751610 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.751655 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdhrm\" (UniqueName: \"kubernetes.io/projected/cab500d6-0a90-45c1-b760-53db118834a3-kube-api-access-gdhrm\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.751669 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.751681 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cab500d6-0a90-45c1-b760-53db118834a3-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:09 crc kubenswrapper[4710]: I1128 17:19:09.751692 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cab500d6-0a90-45c1-b760-53db118834a3-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.054805 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-664bc7f8c8-z9vbx"] Nov 28 17:19:10 crc kubenswrapper[4710]: E1128 17:19:10.055227 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cab500d6-0a90-45c1-b760-53db118834a3" containerName="placement-db-sync" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.055240 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="cab500d6-0a90-45c1-b760-53db118834a3" containerName="placement-db-sync" Nov 28 17:19:10 crc kubenswrapper[4710]: E1128 17:19:10.055275 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3dcaf00-e307-4c65-9609-917193115f81" containerName="init" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.055282 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3dcaf00-e307-4c65-9609-917193115f81" containerName="init" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.055462 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab500d6-0a90-45c1-b760-53db118834a3" containerName="placement-db-sync" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.055486 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3dcaf00-e307-4c65-9609-917193115f81" containerName="init" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.056587 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.066330 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.066623 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.078800 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-664bc7f8c8-z9vbx"] Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.084033 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-n5chx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.084047 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n5chx" event={"ID":"cab500d6-0a90-45c1-b760-53db118834a3","Type":"ContainerDied","Data":"136b163ad420566f7112250ea018392fd3e1a9c2eeefa16da4595e9177947635"} Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.084367 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="136b163ad420566f7112250ea018392fd3e1a9c2eeefa16da4595e9177947635" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.093769 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7664a4f2-321d-4ec9-a03c-bb337fc93963","Type":"ContainerStarted","Data":"ce722b2f517a4a73d9b89750be915e23eea4c3e6d9f710cb3d3c911f25205c7a"} Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.100318 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd1cadac-4227-4d3a-9d90-630dfa496fe6","Type":"ContainerStarted","Data":"1df654bcc1882852f4f58d3c7a19bf4c62ab0433cf2fb1c7ca0c4cd2973e7298"} Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.129124 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.1291038 podStartE2EDuration="5.1291038s" podCreationTimestamp="2025-11-28 17:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:10.114079148 +0000 UTC m=+1239.372379193" watchObservedRunningTime="2025-11-28 17:19:10.1291038 +0000 UTC m=+1239.387403845" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.159618 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-public-tls-certs\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.159679 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4930075-1fb1-4342-af3e-62e0c0f249d1-logs\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.159825 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-config-data\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.159924 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-combined-ca-bundle\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.159981 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-internal-tls-certs\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.160005 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvss\" (UniqueName: \"kubernetes.io/projected/b4930075-1fb1-4342-af3e-62e0c0f249d1-kube-api-access-jxvss\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.160061 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-scripts\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.160114 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.160102114 podStartE2EDuration="5.160102114s" podCreationTimestamp="2025-11-28 17:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:10.148090776 +0000 UTC m=+1239.406390821" watchObservedRunningTime="2025-11-28 17:19:10.160102114 +0000 UTC m=+1239.418402149" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.260726 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-internal-tls-certs\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.260792 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxvss\" (UniqueName: \"kubernetes.io/projected/b4930075-1fb1-4342-af3e-62e0c0f249d1-kube-api-access-jxvss\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.260828 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-scripts\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.260931 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-public-tls-certs\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.260970 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4930075-1fb1-4342-af3e-62e0c0f249d1-logs\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc 
kubenswrapper[4710]: I1128 17:19:10.261058 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-config-data\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.261123 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-combined-ca-bundle\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.261554 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b4930075-1fb1-4342-af3e-62e0c0f249d1-logs\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.266464 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-internal-tls-certs\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.268224 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-config-data\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.269212 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-combined-ca-bundle\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.271479 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-scripts\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.284307 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b4930075-1fb1-4342-af3e-62e0c0f249d1-public-tls-certs\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.299700 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxvss\" (UniqueName: \"kubernetes.io/projected/b4930075-1fb1-4342-af3e-62e0c0f249d1-kube-api-access-jxvss\") pod \"placement-664bc7f8c8-z9vbx\" (UID: \"b4930075-1fb1-4342-af3e-62e0c0f249d1\") " pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.324557 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:19:10 crc 
kubenswrapper[4710]: I1128 17:19:10.399060 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:19:10 crc kubenswrapper[4710]: I1128 17:19:10.448808 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:11 crc kubenswrapper[4710]: I1128 17:19:11.155165 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3dcaf00-e307-4c65-9609-917193115f81" path="/var/lib/kubelet/pods/d3dcaf00-e307-4c65-9609-917193115f81/volumes" Nov 28 17:19:11 crc kubenswrapper[4710]: I1128 17:19:11.970293 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-654d6f49b5-qjswk"] Nov 28 17:19:11 crc kubenswrapper[4710]: I1128 17:19:11.972315 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:11 crc kubenswrapper[4710]: I1128 17:19:11.975938 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 28 17:19:11 crc kubenswrapper[4710]: I1128 17:19:11.976237 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 28 17:19:11 crc kubenswrapper[4710]: I1128 17:19:11.993614 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-654d6f49b5-qjswk"] Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.110380 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-httpd-config\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.110856 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-internal-tls-certs\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.110889 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-combined-ca-bundle\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.110929 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-public-tls-certs\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.110958 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-config\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.110990 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-pnr9g\" (UniqueName: \"kubernetes.io/projected/8c44bf34-558b-4635-9122-b144d09c7085-kube-api-access-pnr9g\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.111016 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-ovndb-tls-certs\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.122920 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerName="glance-log" containerID="cri-o://8c3d1c7341d2b32445bf57527cb1a9496a13dc815b67b64f5a5dbc571a3ee417" gracePeriod=30 Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.122997 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerName="glance-httpd" containerID="cri-o://1df654bcc1882852f4f58d3c7a19bf4c62ab0433cf2fb1c7ca0c4cd2973e7298" gracePeriod=30 Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.123302 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerName="glance-log" containerID="cri-o://bcc722bb92d4167d6d73c663f97919e03d361296c444ddab88cad2558e668697" gracePeriod=30 Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.123313 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerName="glance-httpd" containerID="cri-o://ce722b2f517a4a73d9b89750be915e23eea4c3e6d9f710cb3d3c911f25205c7a" gracePeriod=30 Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.212715 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-httpd-config\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.212830 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-internal-tls-certs\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.212863 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-combined-ca-bundle\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.212904 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-public-tls-certs\") pod \"neutron-654d6f49b5-qjswk\" (UID: 
\"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.212933 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-config\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.212966 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnr9g\" (UniqueName: \"kubernetes.io/projected/8c44bf34-558b-4635-9122-b144d09c7085-kube-api-access-pnr9g\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.212989 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-ovndb-tls-certs\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.219561 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-ovndb-tls-certs\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.219578 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-public-tls-certs\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.220583 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-httpd-config\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.223088 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-config\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.237238 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-combined-ca-bundle\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.251071 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c44bf34-558b-4635-9122-b144d09c7085-internal-tls-certs\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.254828 4710 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pnr9g\" (UniqueName: \"kubernetes.io/projected/8c44bf34-558b-4635-9122-b144d09c7085-kube-api-access-pnr9g\") pod \"neutron-654d6f49b5-qjswk\" (UID: \"8c44bf34-558b-4635-9122-b144d09c7085\") " pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:12 crc kubenswrapper[4710]: I1128 17:19:12.307979 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:13 crc kubenswrapper[4710]: I1128 17:19:13.136661 4710 generic.go:334] "Generic (PLEG): container finished" podID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerID="ce722b2f517a4a73d9b89750be915e23eea4c3e6d9f710cb3d3c911f25205c7a" exitCode=0 Nov 28 17:19:13 crc kubenswrapper[4710]: I1128 17:19:13.137249 4710 generic.go:334] "Generic (PLEG): container finished" podID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerID="bcc722bb92d4167d6d73c663f97919e03d361296c444ddab88cad2558e668697" exitCode=143 Nov 28 17:19:13 crc kubenswrapper[4710]: I1128 17:19:13.136803 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7664a4f2-321d-4ec9-a03c-bb337fc93963","Type":"ContainerDied","Data":"ce722b2f517a4a73d9b89750be915e23eea4c3e6d9f710cb3d3c911f25205c7a"} Nov 28 17:19:13 crc kubenswrapper[4710]: I1128 17:19:13.137570 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7664a4f2-321d-4ec9-a03c-bb337fc93963","Type":"ContainerDied","Data":"bcc722bb92d4167d6d73c663f97919e03d361296c444ddab88cad2558e668697"} Nov 28 17:19:13 crc kubenswrapper[4710]: I1128 17:19:13.141192 4710 generic.go:334] "Generic (PLEG): container finished" podID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerID="1df654bcc1882852f4f58d3c7a19bf4c62ab0433cf2fb1c7ca0c4cd2973e7298" exitCode=0 Nov 28 17:19:13 crc kubenswrapper[4710]: I1128 17:19:13.141228 4710 generic.go:334] "Generic (PLEG): container finished" podID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerID="8c3d1c7341d2b32445bf57527cb1a9496a13dc815b67b64f5a5dbc571a3ee417" exitCode=143 Nov 28 17:19:13 crc kubenswrapper[4710]: I1128 17:19:13.159934 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd1cadac-4227-4d3a-9d90-630dfa496fe6","Type":"ContainerDied","Data":"1df654bcc1882852f4f58d3c7a19bf4c62ab0433cf2fb1c7ca0c4cd2973e7298"} Nov 28 17:19:13 crc kubenswrapper[4710]: I1128 17:19:13.159974 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd1cadac-4227-4d3a-9d90-630dfa496fe6","Type":"ContainerDied","Data":"8c3d1c7341d2b32445bf57527cb1a9496a13dc815b67b64f5a5dbc571a3ee417"} Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.003545 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.051466 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz8rr\" (UniqueName: \"kubernetes.io/projected/0502e48f-0338-42fa-9403-e87c11997261-kube-api-access-mz8rr\") pod \"0502e48f-0338-42fa-9403-e87c11997261\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.051573 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-scripts\") pod \"0502e48f-0338-42fa-9403-e87c11997261\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.051601 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-fernet-keys\") pod \"0502e48f-0338-42fa-9403-e87c11997261\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.051690 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-combined-ca-bundle\") pod \"0502e48f-0338-42fa-9403-e87c11997261\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.051770 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-config-data\") pod \"0502e48f-0338-42fa-9403-e87c11997261\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.051802 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-credential-keys\") pod \"0502e48f-0338-42fa-9403-e87c11997261\" (UID: \"0502e48f-0338-42fa-9403-e87c11997261\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.063951 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-scripts" (OuterVolumeSpecName: "scripts") pod "0502e48f-0338-42fa-9403-e87c11997261" (UID: "0502e48f-0338-42fa-9403-e87c11997261"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.063998 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "0502e48f-0338-42fa-9403-e87c11997261" (UID: "0502e48f-0338-42fa-9403-e87c11997261"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.064004 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0502e48f-0338-42fa-9403-e87c11997261-kube-api-access-mz8rr" (OuterVolumeSpecName: "kube-api-access-mz8rr") pod "0502e48f-0338-42fa-9403-e87c11997261" (UID: "0502e48f-0338-42fa-9403-e87c11997261"). InnerVolumeSpecName "kube-api-access-mz8rr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.064080 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "0502e48f-0338-42fa-9403-e87c11997261" (UID: "0502e48f-0338-42fa-9403-e87c11997261"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.107559 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0502e48f-0338-42fa-9403-e87c11997261" (UID: "0502e48f-0338-42fa-9403-e87c11997261"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.123034 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-config-data" (OuterVolumeSpecName: "config-data") pod "0502e48f-0338-42fa-9403-e87c11997261" (UID: "0502e48f-0338-42fa-9403-e87c11997261"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.153702 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz8rr\" (UniqueName: \"kubernetes.io/projected/0502e48f-0338-42fa-9403-e87c11997261-kube-api-access-mz8rr\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.153736 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.153747 4710 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.153761 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.153788 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.153799 4710 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0502e48f-0338-42fa-9403-e87c11997261-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.164215 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerStarted","Data":"9e12adecb1b3b33184238b5cb2c9c403c57d9e6c4a87289108280819023e39e5"} Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.166246 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mtmgk" 
event={"ID":"0502e48f-0338-42fa-9403-e87c11997261","Type":"ContainerDied","Data":"93bd3eaa5d9c28008520db735242c208de69c1e7f8c7927dc29d57a17d8fbed4"} Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.166296 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93bd3eaa5d9c28008520db735242c208de69c1e7f8c7927dc29d57a17d8fbed4" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.166368 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mtmgk" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.260477 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-664bc7f8c8-z9vbx"] Nov 28 17:19:14 crc kubenswrapper[4710]: W1128 17:19:14.263572 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb4930075_1fb1_4342_af3e_62e0c0f249d1.slice/crio-1c76d0d2bb95273f30b56bcc89b8a506d82e5dfc5504e537f7fad4c4ff84dfd6 WatchSource:0}: Error finding container 1c76d0d2bb95273f30b56bcc89b8a506d82e5dfc5504e537f7fad4c4ff84dfd6: Status 404 returned error can't find the container with id 1c76d0d2bb95273f30b56bcc89b8a506d82e5dfc5504e537f7fad4c4ff84dfd6 Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.347472 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.459057 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-combined-ca-bundle\") pod \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.459226 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-config-data\") pod \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.459251 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.459333 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shm9j\" (UniqueName: \"kubernetes.io/projected/fd1cadac-4227-4d3a-9d90-630dfa496fe6-kube-api-access-shm9j\") pod \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.459368 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-scripts\") pod \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.459451 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-logs\") pod \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 
17:19:14.459488 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-httpd-run\") pod \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\" (UID: \"fd1cadac-4227-4d3a-9d90-630dfa496fe6\") " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.460423 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fd1cadac-4227-4d3a-9d90-630dfa496fe6" (UID: "fd1cadac-4227-4d3a-9d90-630dfa496fe6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.461920 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-logs" (OuterVolumeSpecName: "logs") pod "fd1cadac-4227-4d3a-9d90-630dfa496fe6" (UID: "fd1cadac-4227-4d3a-9d90-630dfa496fe6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.468016 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-scripts" (OuterVolumeSpecName: "scripts") pod "fd1cadac-4227-4d3a-9d90-630dfa496fe6" (UID: "fd1cadac-4227-4d3a-9d90-630dfa496fe6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.468284 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "fd1cadac-4227-4d3a-9d90-630dfa496fe6" (UID: "fd1cadac-4227-4d3a-9d90-630dfa496fe6"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.473244 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd1cadac-4227-4d3a-9d90-630dfa496fe6-kube-api-access-shm9j" (OuterVolumeSpecName: "kube-api-access-shm9j") pod "fd1cadac-4227-4d3a-9d90-630dfa496fe6" (UID: "fd1cadac-4227-4d3a-9d90-630dfa496fe6"). InnerVolumeSpecName "kube-api-access-shm9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.502988 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fd1cadac-4227-4d3a-9d90-630dfa496fe6" (UID: "fd1cadac-4227-4d3a-9d90-630dfa496fe6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.541592 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-config-data" (OuterVolumeSpecName: "config-data") pod "fd1cadac-4227-4d3a-9d90-630dfa496fe6" (UID: "fd1cadac-4227-4d3a-9d90-630dfa496fe6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.574105 4710 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.574149 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.574165 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.574208 4710 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.574224 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shm9j\" (UniqueName: \"kubernetes.io/projected/fd1cadac-4227-4d3a-9d90-630dfa496fe6-kube-api-access-shm9j\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.574237 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd1cadac-4227-4d3a-9d90-630dfa496fe6-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.574248 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd1cadac-4227-4d3a-9d90-630dfa496fe6-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:14 crc kubenswrapper[4710]: W1128 17:19:14.577261 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c44bf34_558b_4635_9122_b144d09c7085.slice/crio-63639d9525707f8adb63585bb943acf18dbc3b81e8bcb3d21fe87428112f86db WatchSource:0}: Error finding container 63639d9525707f8adb63585bb943acf18dbc3b81e8bcb3d21fe87428112f86db: Status 404 returned error can't find the container with id 63639d9525707f8adb63585bb943acf18dbc3b81e8bcb3d21fe87428112f86db Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.586796 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-654d6f49b5-qjswk"] Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.603756 4710 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 28 17:19:14 crc kubenswrapper[4710]: I1128 17:19:14.675579 4710 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.200378 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-654d6f49b5-qjswk" event={"ID":"8c44bf34-558b-4635-9122-b144d09c7085","Type":"ContainerStarted","Data":"4e82413d0749978abd124ddb05cdbb7f2868e3b9b5c62be95e8023f01b6d86f0"} Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.200695 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-654d6f49b5-qjswk" 
event={"ID":"8c44bf34-558b-4635-9122-b144d09c7085","Type":"ContainerStarted","Data":"63639d9525707f8adb63585bb943acf18dbc3b81e8bcb3d21fe87428112f86db"} Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.206474 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-664bc7f8c8-z9vbx" event={"ID":"b4930075-1fb1-4342-af3e-62e0c0f249d1","Type":"ContainerStarted","Data":"521bafcfbd46c3cb1260b36bdd6b79e0d371abd18b5cabc5cb74bb00faeb76b2"} Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.206498 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-664bc7f8c8-z9vbx" event={"ID":"b4930075-1fb1-4342-af3e-62e0c0f249d1","Type":"ContainerStarted","Data":"e1f814ad68199f148dc4b15dd1887404e5e9d45c523065590fd7ab2abaa2871b"} Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.206506 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-664bc7f8c8-z9vbx" event={"ID":"b4930075-1fb1-4342-af3e-62e0c0f249d1","Type":"ContainerStarted","Data":"1c76d0d2bb95273f30b56bcc89b8a506d82e5dfc5504e537f7fad4c4ff84dfd6"} Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.208401 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7559b9d56c-625td"] Nov 28 17:19:15 crc kubenswrapper[4710]: E1128 17:19:15.208821 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerName="glance-httpd" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.208833 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerName="glance-httpd" Nov 28 17:19:15 crc kubenswrapper[4710]: E1128 17:19:15.208866 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0502e48f-0338-42fa-9403-e87c11997261" containerName="keystone-bootstrap" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.208874 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="0502e48f-0338-42fa-9403-e87c11997261" containerName="keystone-bootstrap" Nov 28 17:19:15 crc kubenswrapper[4710]: E1128 17:19:15.208889 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerName="glance-log" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.208895 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerName="glance-log" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.209078 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerName="glance-log" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.209092 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="0502e48f-0338-42fa-9403-e87c11997261" containerName="keystone-bootstrap" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.209109 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" containerName="glance-httpd" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.210104 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.219200 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.219289 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.230915 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.231169 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.231333 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xmd8n" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.231441 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.231589 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.231694 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.238567 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7559b9d56c-625td"] Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.278009 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fd1cadac-4227-4d3a-9d90-630dfa496fe6","Type":"ContainerDied","Data":"171d8d53dce19ef45139aa753816ae9da194f08107e706b70ebe5fd7804b0f06"} Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.278063 4710 scope.go:117] "RemoveContainer" containerID="1df654bcc1882852f4f58d3c7a19bf4c62ab0433cf2fb1c7ca0c4cd2973e7298" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.278200 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.286928 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-scripts\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.286999 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-internal-tls-certs\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.287066 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-fernet-keys\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.287122 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-credential-keys\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.287179 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-combined-ca-bundle\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.287250 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-config-data\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.287327 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-public-tls-certs\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.287365 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kqsh\" (UniqueName: \"kubernetes.io/projected/c67d7e30-dd12-4650-9063-cb49b972e3b5-kube-api-access-5kqsh\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.310000 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9mv8x" 
event={"ID":"f03a0db7-fab9-4d77-8f2e-368c122983ca","Type":"ContainerStarted","Data":"7d5076a971ad39755d96e5c6f6fb865b1796577214752160bf982a0ee5c69b44"} Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.311616 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-664bc7f8c8-z9vbx" podStartSLOduration=5.311596013 podStartE2EDuration="5.311596013s" podCreationTimestamp="2025-11-28 17:19:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:15.277089169 +0000 UTC m=+1244.535389214" watchObservedRunningTime="2025-11-28 17:19:15.311596013 +0000 UTC m=+1244.569896058" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.389312 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-public-tls-certs\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.389375 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kqsh\" (UniqueName: \"kubernetes.io/projected/c67d7e30-dd12-4650-9063-cb49b972e3b5-kube-api-access-5kqsh\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.389421 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-scripts\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.389451 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-internal-tls-certs\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.389505 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-fernet-keys\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.389546 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-credential-keys\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.389587 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-combined-ca-bundle\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.389639 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-config-data\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.404790 4710 scope.go:117] "RemoveContainer" containerID="8c3d1c7341d2b32445bf57527cb1a9496a13dc815b67b64f5a5dbc571a3ee417" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.409318 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-scripts\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.409387 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-fernet-keys\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.410169 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-public-tls-certs\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.410292 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-combined-ca-bundle\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.411173 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-credential-keys\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.411181 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-internal-tls-certs\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.415035 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kqsh\" (UniqueName: \"kubernetes.io/projected/c67d7e30-dd12-4650-9063-cb49b972e3b5-kube-api-access-5kqsh\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.416311 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.419533 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c67d7e30-dd12-4650-9063-cb49b972e3b5-config-data\") pod \"keystone-7559b9d56c-625td\" (UID: \"c67d7e30-dd12-4650-9063-cb49b972e3b5\") " pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.439427 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.463585 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.473204 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:19:15 crc kubenswrapper[4710]: E1128 17:19:15.473624 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerName="glance-httpd" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.473636 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerName="glance-httpd" Nov 28 17:19:15 crc kubenswrapper[4710]: E1128 17:19:15.473677 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerName="glance-log" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.473683 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerName="glance-log" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.473878 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerName="glance-log" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.473890 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="7664a4f2-321d-4ec9-a03c-bb337fc93963" containerName="glance-httpd" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.474872 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.477287 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.478287 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.479492 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9mv8x" podStartSLOduration=3.13708738 podStartE2EDuration="37.479477035s" podCreationTimestamp="2025-11-28 17:18:38 +0000 UTC" firstStartedPulling="2025-11-28 17:18:40.270976952 +0000 UTC m=+1209.529276997" lastFinishedPulling="2025-11-28 17:19:14.613366607 +0000 UTC m=+1243.871666652" observedRunningTime="2025-11-28 17:19:15.372459144 +0000 UTC m=+1244.630759209" watchObservedRunningTime="2025-11-28 17:19:15.479477035 +0000 UTC m=+1244.737777080" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.506689 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.592256 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptd6t\" (UniqueName: \"kubernetes.io/projected/7664a4f2-321d-4ec9-a03c-bb337fc93963-kube-api-access-ptd6t\") pod \"7664a4f2-321d-4ec9-a03c-bb337fc93963\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.592366 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"7664a4f2-321d-4ec9-a03c-bb337fc93963\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.592452 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-scripts\") pod \"7664a4f2-321d-4ec9-a03c-bb337fc93963\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.592493 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-httpd-run\") pod \"7664a4f2-321d-4ec9-a03c-bb337fc93963\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.592524 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-combined-ca-bundle\") pod \"7664a4f2-321d-4ec9-a03c-bb337fc93963\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593049 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-logs\") pod \"7664a4f2-321d-4ec9-a03c-bb337fc93963\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593079 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"7664a4f2-321d-4ec9-a03c-bb337fc93963" (UID: "7664a4f2-321d-4ec9-a03c-bb337fc93963"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593172 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-config-data\") pod \"7664a4f2-321d-4ec9-a03c-bb337fc93963\" (UID: \"7664a4f2-321d-4ec9-a03c-bb337fc93963\") " Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593453 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-scripts\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593497 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593570 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-logs\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593592 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-logs" (OuterVolumeSpecName: "logs") pod "7664a4f2-321d-4ec9-a03c-bb337fc93963" (UID: "7664a4f2-321d-4ec9-a03c-bb337fc93963"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593618 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-config-data\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593782 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.593833 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4m6g\" (UniqueName: \"kubernetes.io/projected/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-kube-api-access-f4m6g\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.594073 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.594208 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.594384 4710 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.594407 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7664a4f2-321d-4ec9-a03c-bb337fc93963-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.609980 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "7664a4f2-321d-4ec9-a03c-bb337fc93963" (UID: "7664a4f2-321d-4ec9-a03c-bb337fc93963"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.609988 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-scripts" (OuterVolumeSpecName: "scripts") pod "7664a4f2-321d-4ec9-a03c-bb337fc93963" (UID: "7664a4f2-321d-4ec9-a03c-bb337fc93963"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.610012 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7664a4f2-321d-4ec9-a03c-bb337fc93963-kube-api-access-ptd6t" (OuterVolumeSpecName: "kube-api-access-ptd6t") pod "7664a4f2-321d-4ec9-a03c-bb337fc93963" (UID: "7664a4f2-321d-4ec9-a03c-bb337fc93963"). InnerVolumeSpecName "kube-api-access-ptd6t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.627502 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.629626 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7664a4f2-321d-4ec9-a03c-bb337fc93963" (UID: "7664a4f2-321d-4ec9-a03c-bb337fc93963"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.681258 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-config-data" (OuterVolumeSpecName: "config-data") pod "7664a4f2-321d-4ec9-a03c-bb337fc93963" (UID: "7664a4f2-321d-4ec9-a03c-bb337fc93963"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696242 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696297 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4m6g\" (UniqueName: \"kubernetes.io/projected/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-kube-api-access-f4m6g\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696409 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696468 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696508 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-scripts\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696540 
4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696596 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-logs\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696693 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-config-data\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696749 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696766 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptd6t\" (UniqueName: \"kubernetes.io/projected/7664a4f2-321d-4ec9-a03c-bb337fc93963-kube-api-access-ptd6t\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696803 4710 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696816 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.696830 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7664a4f2-321d-4ec9-a03c-bb337fc93963-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.697190 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.697623 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.698994 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-logs\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 
17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.702725 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.703404 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-scripts\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.708828 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.727131 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-config-data\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.729494 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4m6g\" (UniqueName: \"kubernetes.io/projected/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-kube-api-access-f4m6g\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.738010 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " pod="openstack/glance-default-external-api-0" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.747321 4710 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.799557 4710 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:15 crc kubenswrapper[4710]: I1128 17:19:15.813103 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.162328 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7559b9d56c-625td"] Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.251960 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.355208 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-654d6f49b5-qjswk" event={"ID":"8c44bf34-558b-4635-9122-b144d09c7085","Type":"ContainerStarted","Data":"1c78ff210b6fe1d91b4ff057c38d5f5c61abac5ccb31c976a7a5297025646a59"} Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.356245 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.357943 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7559b9d56c-625td" event={"ID":"c67d7e30-dd12-4650-9063-cb49b972e3b5","Type":"ContainerStarted","Data":"ee2f91d31de96bc1447addfae6ea590da09fe65ebe0f4ddab5256537de18dec5"} Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.358373 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-w2dhd"] Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.358600 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" podUID="1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" containerName="dnsmasq-dns" containerID="cri-o://0c3e977d07f16304038ca3631b473fd5771aab94460313f05e917c49bfdc1c79" gracePeriod=10 Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.366973 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.367128 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7664a4f2-321d-4ec9-a03c-bb337fc93963","Type":"ContainerDied","Data":"cf3122b261c4cd1b5adc46c4874ddd68bc06c7b76ebcaa3fe7905523e1a5463c"} Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.367184 4710 scope.go:117] "RemoveContainer" containerID="ce722b2f517a4a73d9b89750be915e23eea4c3e6d9f710cb3d3c911f25205c7a" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.398394 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-654d6f49b5-qjswk" podStartSLOduration=5.39837625 podStartE2EDuration="5.39837625s" podCreationTimestamp="2025-11-28 17:19:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:16.390555345 +0000 UTC m=+1245.648855390" watchObservedRunningTime="2025-11-28 17:19:16.39837625 +0000 UTC m=+1245.656676295" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.440994 4710 scope.go:117] "RemoveContainer" containerID="bcc722bb92d4167d6d73c663f97919e03d361296c444ddab88cad2558e668697" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.459850 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.490305 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.509314 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.533042 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.535531 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.540236 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.540673 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.555909 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.625168 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfgkz\" (UniqueName: \"kubernetes.io/projected/f59c3678-cb58-4462-9ef6-7d91911117ee-kube-api-access-mfgkz\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.625265 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.625310 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.625374 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.626033 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.626071 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.626088 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.626107 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.729994 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.730047 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.730080 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.730224 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfgkz\" (UniqueName: \"kubernetes.io/projected/f59c3678-cb58-4462-9ef6-7d91911117ee-kube-api-access-mfgkz\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.730274 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.730339 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.730450 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.730488 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.730493 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.730637 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.731023 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-logs\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.737481 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.737712 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.738206 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.745676 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.748865 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfgkz\" (UniqueName: \"kubernetes.io/projected/f59c3678-cb58-4462-9ef6-7d91911117ee-kube-api-access-mfgkz\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.776888 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:19:16 crc kubenswrapper[4710]: I1128 17:19:16.871319 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.156634 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7664a4f2-321d-4ec9-a03c-bb337fc93963" path="/var/lib/kubelet/pods/7664a4f2-321d-4ec9-a03c-bb337fc93963/volumes" Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.158871 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd1cadac-4227-4d3a-9d90-630dfa496fe6" path="/var/lib/kubelet/pods/fd1cadac-4227-4d3a-9d90-630dfa496fe6/volumes" Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.399933 4710 generic.go:334] "Generic (PLEG): container finished" podID="1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" containerID="0c3e977d07f16304038ca3631b473fd5771aab94460313f05e917c49bfdc1c79" exitCode=0 Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.400030 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" event={"ID":"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a","Type":"ContainerDied","Data":"0c3e977d07f16304038ca3631b473fd5771aab94460313f05e917c49bfdc1c79"} Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.404170 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7559b9d56c-625td" event={"ID":"c67d7e30-dd12-4650-9063-cb49b972e3b5","Type":"ContainerStarted","Data":"2c799cf8e4fc1219a81eb3f76d035f4ac8c33d1e3291c3332422260e8eeae11a"} Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.404309 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.416236 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"661e3628-1a58-4dda-8cb6-c07c13c5b7f3","Type":"ContainerStarted","Data":"629863b32cabb090c5f186c7a3eec3329a75a9a9b11963440dfac8179015b25b"} Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.416287 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"661e3628-1a58-4dda-8cb6-c07c13c5b7f3","Type":"ContainerStarted","Data":"c32bbc3a5d5354599ab40968c7b5b6d6ecbcd154101d1cc67134b4cf7dce52e4"} Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.449606 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7559b9d56c-625td" podStartSLOduration=2.449588741 podStartE2EDuration="2.449588741s" podCreationTimestamp="2025-11-28 17:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:17.439066161 +0000 UTC m=+1246.697366216" watchObservedRunningTime="2025-11-28 17:19:17.449588741 +0000 UTC m=+1246.707888786" Nov 28 17:19:17 crc kubenswrapper[4710]: I1128 17:19:17.472425 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:19:17 crc kubenswrapper[4710]: W1128 17:19:17.504384 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf59c3678_cb58_4462_9ef6_7d91911117ee.slice/crio-dc0082634e8ee9745f839750fd75a3afb92e1babe0fae946886b75a141d06e14 WatchSource:0}: Error finding container dc0082634e8ee9745f839750fd75a3afb92e1babe0fae946886b75a141d06e14: Status 404 returned error can't find the container with id dc0082634e8ee9745f839750fd75a3afb92e1babe0fae946886b75a141d06e14 Nov 28 17:19:18 crc 
Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.185405 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd"
Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.260438 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbnhf\" (UniqueName: \"kubernetes.io/projected/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-kube-api-access-gbnhf\") pod \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") "
Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.260522 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-svc\") pod \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") "
Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.260688 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-config\") pod \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") "
Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.260859 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-nb\") pod \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") "
Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.260904 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-swift-storage-0\") pod \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") "
Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.261003 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-sb\") pod \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\" (UID: \"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a\") "
Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.286465 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-kube-api-access-gbnhf" (OuterVolumeSpecName: "kube-api-access-gbnhf") pod "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" (UID: "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a"). InnerVolumeSpecName "kube-api-access-gbnhf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.366891 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbnhf\" (UniqueName: \"kubernetes.io/projected/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-kube-api-access-gbnhf\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.438190 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f59c3678-cb58-4462-9ef6-7d91911117ee","Type":"ContainerStarted","Data":"86b3dfd43dbd66f7b02cf8515b1bff02bbc8ae27511e132ec3c0b461f4a4d40e"} Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.438227 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f59c3678-cb58-4462-9ef6-7d91911117ee","Type":"ContainerStarted","Data":"dc0082634e8ee9745f839750fd75a3afb92e1babe0fae946886b75a141d06e14"} Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.450962 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" (UID: "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.451128 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" (UID: "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.451377 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.451550 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-w2dhd" event={"ID":"1857f5f6-dbe0-4211-9376-0d30a3d9eb8a","Type":"ContainerDied","Data":"4cc7350e48608ceb5fb99ca94e32fda84d43348dabd2a3529a15584edeb04871"} Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.451622 4710 scope.go:117] "RemoveContainer" containerID="0c3e977d07f16304038ca3631b473fd5771aab94460313f05e917c49bfdc1c79" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.460210 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" (UID: "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.470499 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.476019 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.476039 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.473322 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-config" (OuterVolumeSpecName: "config") pod "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" (UID: "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.496238 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" (UID: "1857f5f6-dbe0-4211-9376-0d30a3d9eb8a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.506140 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.506118859 podStartE2EDuration="3.506118859s" podCreationTimestamp="2025-11-28 17:19:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:18.496125405 +0000 UTC m=+1247.754425450" watchObservedRunningTime="2025-11-28 17:19:18.506118859 +0000 UTC m=+1247.764418904" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.549300 4710 scope.go:117] "RemoveContainer" containerID="b131a1072266e75edd164fc161a13921ed902b113c8b980e40744f6c93d389d8" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.578330 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.578366 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.822368 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-w2dhd"] Nov 28 17:19:18 crc kubenswrapper[4710]: I1128 17:19:18.831416 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-w2dhd"] Nov 28 17:19:19 crc kubenswrapper[4710]: I1128 17:19:19.168254 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" 
path="/var/lib/kubelet/pods/1857f5f6-dbe0-4211-9376-0d30a3d9eb8a/volumes" Nov 28 17:19:19 crc kubenswrapper[4710]: I1128 17:19:19.485964 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f59c3678-cb58-4462-9ef6-7d91911117ee","Type":"ContainerStarted","Data":"25cd53bf119f2d67e1e659a9f155f09ff66968f2331d5606757803945df5375c"} Nov 28 17:19:19 crc kubenswrapper[4710]: I1128 17:19:19.488154 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"661e3628-1a58-4dda-8cb6-c07c13c5b7f3","Type":"ContainerStarted","Data":"509270d07e24efd376b2c6dcbf5dcc8eb1474d6d025d23f660fc2ddadf42a597"} Nov 28 17:19:19 crc kubenswrapper[4710]: I1128 17:19:19.495288 4710 generic.go:334] "Generic (PLEG): container finished" podID="f03a0db7-fab9-4d77-8f2e-368c122983ca" containerID="7d5076a971ad39755d96e5c6f6fb865b1796577214752160bf982a0ee5c69b44" exitCode=0 Nov 28 17:19:19 crc kubenswrapper[4710]: I1128 17:19:19.495344 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9mv8x" event={"ID":"f03a0db7-fab9-4d77-8f2e-368c122983ca","Type":"ContainerDied","Data":"7d5076a971ad39755d96e5c6f6fb865b1796577214752160bf982a0ee5c69b44"} Nov 28 17:19:19 crc kubenswrapper[4710]: I1128 17:19:19.521200 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.521180904 podStartE2EDuration="3.521180904s" podCreationTimestamp="2025-11-28 17:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:19.504034326 +0000 UTC m=+1248.762334371" watchObservedRunningTime="2025-11-28 17:19:19.521180904 +0000 UTC m=+1248.779480949" Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.042860 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.165536 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-combined-ca-bundle\") pod \"f03a0db7-fab9-4d77-8f2e-368c122983ca\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.166637 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjkzd\" (UniqueName: \"kubernetes.io/projected/f03a0db7-fab9-4d77-8f2e-368c122983ca-kube-api-access-zjkzd\") pod \"f03a0db7-fab9-4d77-8f2e-368c122983ca\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.167196 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-db-sync-config-data\") pod \"f03a0db7-fab9-4d77-8f2e-368c122983ca\" (UID: \"f03a0db7-fab9-4d77-8f2e-368c122983ca\") " Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.184826 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f03a0db7-fab9-4d77-8f2e-368c122983ca" (UID: "f03a0db7-fab9-4d77-8f2e-368c122983ca"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.191429 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03a0db7-fab9-4d77-8f2e-368c122983ca-kube-api-access-zjkzd" (OuterVolumeSpecName: "kube-api-access-zjkzd") pod "f03a0db7-fab9-4d77-8f2e-368c122983ca" (UID: "f03a0db7-fab9-4d77-8f2e-368c122983ca"). InnerVolumeSpecName "kube-api-access-zjkzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.205148 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f03a0db7-fab9-4d77-8f2e-368c122983ca" (UID: "f03a0db7-fab9-4d77-8f2e-368c122983ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.271381 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.271422 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjkzd\" (UniqueName: \"kubernetes.io/projected/f03a0db7-fab9-4d77-8f2e-368c122983ca-kube-api-access-zjkzd\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.271438 4710 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f03a0db7-fab9-4d77-8f2e-368c122983ca-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.526845 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9mv8x" event={"ID":"f03a0db7-fab9-4d77-8f2e-368c122983ca","Type":"ContainerDied","Data":"d319cc4472a517e8625772417e26aea344031ecda3aeb75d0b334d9bb4098414"} Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.527145 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d319cc4472a517e8625772417e26aea344031ecda3aeb75d0b334d9bb4098414" Nov 28 17:19:22 crc kubenswrapper[4710]: I1128 17:19:22.526946 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9mv8x" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.410256 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f976d8c48-8849p"] Nov 28 17:19:23 crc kubenswrapper[4710]: E1128 17:19:23.410813 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" containerName="dnsmasq-dns" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.410853 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" containerName="dnsmasq-dns" Nov 28 17:19:23 crc kubenswrapper[4710]: E1128 17:19:23.410891 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" containerName="init" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.410913 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" containerName="init" Nov 28 17:19:23 crc kubenswrapper[4710]: E1128 17:19:23.410935 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03a0db7-fab9-4d77-8f2e-368c122983ca" containerName="barbican-db-sync" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.410943 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03a0db7-fab9-4d77-8f2e-368c122983ca" containerName="barbican-db-sync" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.411241 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03a0db7-fab9-4d77-8f2e-368c122983ca" containerName="barbican-db-sync" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.411260 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="1857f5f6-dbe0-4211-9376-0d30a3d9eb8a" containerName="dnsmasq-dns" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.412553 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.421064 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-c7c6w" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.421368 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.421647 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.431331 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-676bbb9799-m7pq6"] Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.435079 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.444288 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.451863 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f976d8c48-8849p"] Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.461403 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-676bbb9799-m7pq6"] Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.498924 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-config-data-custom\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.499019 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjdz8\" (UniqueName: \"kubernetes.io/projected/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-kube-api-access-bjdz8\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.499060 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-combined-ca-bundle\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.499104 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a0e62fb-f82d-4585-8c51-9c3d947027e9-config-data-custom\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.499131 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a0e62fb-f82d-4585-8c51-9c3d947027e9-logs\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.499190 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a0e62fb-f82d-4585-8c51-9c3d947027e9-combined-ca-bundle\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.499282 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-logs\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " 
pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.499317 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvz8p\" (UniqueName: \"kubernetes.io/projected/3a0e62fb-f82d-4585-8c51-9c3d947027e9-kube-api-access-dvz8p\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.499338 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-config-data\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.499372 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a0e62fb-f82d-4585-8c51-9c3d947027e9-config-data\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.511481 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6w5wf"] Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.519343 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.535351 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6w5wf"] Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610415 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610492 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-config\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610563 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610606 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-config-data-custom\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610637 4710 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhgck\" (UniqueName: \"kubernetes.io/projected/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-kube-api-access-fhgck\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610672 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610708 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjdz8\" (UniqueName: \"kubernetes.io/projected/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-kube-api-access-bjdz8\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610729 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-combined-ca-bundle\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610753 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a0e62fb-f82d-4585-8c51-9c3d947027e9-config-data-custom\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610791 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a0e62fb-f82d-4585-8c51-9c3d947027e9-logs\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610822 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a0e62fb-f82d-4585-8c51-9c3d947027e9-combined-ca-bundle\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610867 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-logs\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610887 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvz8p\" (UniqueName: \"kubernetes.io/projected/3a0e62fb-f82d-4585-8c51-9c3d947027e9-kube-api-access-dvz8p\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " 
pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610907 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-config-data\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610929 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a0e62fb-f82d-4585-8c51-9c3d947027e9-config-data\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.610948 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.613661 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a0e62fb-f82d-4585-8c51-9c3d947027e9-logs\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.621901 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a0e62fb-f82d-4585-8c51-9c3d947027e9-combined-ca-bundle\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.622279 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-logs\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.624302 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3a0e62fb-f82d-4585-8c51-9c3d947027e9-config-data-custom\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.625279 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-config-data\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.625597 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a0e62fb-f82d-4585-8c51-9c3d947027e9-config-data\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " 
pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.626716 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-combined-ca-bundle\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.635890 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-config-data-custom\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.639716 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjdz8\" (UniqueName: \"kubernetes.io/projected/e5a6ae13-4584-4438-a7eb-fd33a80e8ee7-kube-api-access-bjdz8\") pod \"barbican-keystone-listener-5f976d8c48-8849p\" (UID: \"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7\") " pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.647895 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvz8p\" (UniqueName: \"kubernetes.io/projected/3a0e62fb-f82d-4585-8c51-9c3d947027e9-kube-api-access-dvz8p\") pod \"barbican-worker-676bbb9799-m7pq6\" (UID: \"3a0e62fb-f82d-4585-8c51-9c3d947027e9\") " pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.713090 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.713139 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-config\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.713183 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.713207 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhgck\" (UniqueName: \"kubernetes.io/projected/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-kube-api-access-fhgck\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.713234 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" 
(UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.713330 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.714238 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.714393 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.714965 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.715001 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.725561 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-config\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.735834 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-575d5c9474-zgdcv"] Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.738139 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.752634 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhgck\" (UniqueName: \"kubernetes.io/projected/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-kube-api-access-fhgck\") pod \"dnsmasq-dns-848cf88cfc-6w5wf\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.753121 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.760407 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.770428 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-676bbb9799-m7pq6" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.801829 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-575d5c9474-zgdcv"] Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.816207 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97785c4a-071b-453d-b0ad-693c6934b43b-logs\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.816279 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msdw\" (UniqueName: \"kubernetes.io/projected/97785c4a-071b-453d-b0ad-693c6934b43b-kube-api-access-2msdw\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.816349 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.816382 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-combined-ca-bundle\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.816492 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data-custom\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.917879 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.917946 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-combined-ca-bundle\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.918048 4710 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data-custom\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.918134 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97785c4a-071b-453d-b0ad-693c6934b43b-logs\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.918167 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2msdw\" (UniqueName: \"kubernetes.io/projected/97785c4a-071b-453d-b0ad-693c6934b43b-kube-api-access-2msdw\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.919007 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97785c4a-071b-453d-b0ad-693c6934b43b-logs\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.922090 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-combined-ca-bundle\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.922545 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data-custom\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.923974 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:23 crc kubenswrapper[4710]: I1128 17:19:23.938181 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2msdw\" (UniqueName: \"kubernetes.io/projected/97785c4a-071b-453d-b0ad-693c6934b43b-kube-api-access-2msdw\") pod \"barbican-api-575d5c9474-zgdcv\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:24 crc kubenswrapper[4710]: I1128 17:19:24.105594 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:24 crc kubenswrapper[4710]: I1128 17:19:24.180634 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:25 crc kubenswrapper[4710]: I1128 17:19:25.815192 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 17:19:25 crc kubenswrapper[4710]: I1128 17:19:25.815579 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 17:19:25 crc kubenswrapper[4710]: I1128 17:19:25.853871 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 17:19:25 crc kubenswrapper[4710]: I1128 17:19:25.860531 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.029000 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-757985fd5d-pvjnf"] Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.032935 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.037694 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.037858 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.046165 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-757985fd5d-pvjnf"] Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.158390 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-internal-tls-certs\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.158456 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-combined-ca-bundle\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.158530 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-config-data-custom\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.158582 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-config-data\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.158628 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c052297b-c856-44c2-8fd2-66f76671785b-logs\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.158655 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgp4l\" (UniqueName: \"kubernetes.io/projected/c052297b-c856-44c2-8fd2-66f76671785b-kube-api-access-xgp4l\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.158713 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-public-tls-certs\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.260268 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-public-tls-certs\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.260648 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-internal-tls-certs\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.260806 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-combined-ca-bundle\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.260976 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-config-data-custom\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.261482 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-config-data\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.261768 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c052297b-c856-44c2-8fd2-66f76671785b-logs\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.261928 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgp4l\" 
(UniqueName: \"kubernetes.io/projected/c052297b-c856-44c2-8fd2-66f76671785b-kube-api-access-xgp4l\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.262285 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c052297b-c856-44c2-8fd2-66f76671785b-logs\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.266116 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-internal-tls-certs\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.266662 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-config-data\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.269161 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-public-tls-certs\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.274454 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-combined-ca-bundle\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.274912 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c052297b-c856-44c2-8fd2-66f76671785b-config-data-custom\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.285517 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgp4l\" (UniqueName: \"kubernetes.io/projected/c052297b-c856-44c2-8fd2-66f76671785b-kube-api-access-xgp4l\") pod \"barbican-api-757985fd5d-pvjnf\" (UID: \"c052297b-c856-44c2-8fd2-66f76671785b\") " pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.353920 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.579798 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.580160 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.876511 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:26 crc kubenswrapper[4710]: I1128 17:19:26.877142 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.018376 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.020108 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.604969 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerStarted","Data":"ce5268e5ae2a72c54e285a5c9349555746169bd7341239bdd71d7dd4f9b913fd"} Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.605718 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.605742 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.605332 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="proxy-httpd" containerID="cri-o://ce5268e5ae2a72c54e285a5c9349555746169bd7341239bdd71d7dd4f9b913fd" gracePeriod=30 Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.605186 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="ceilometer-central-agent" containerID="cri-o://41fbf3acdb877076b8bbd2b71856051d28dfd2a86b1063d6888f70c29b5b1900" gracePeriod=30 Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.605284 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="sg-core" containerID="cri-o://9e12adecb1b3b33184238b5cb2c9c403c57d9e6c4a87289108280819023e39e5" gracePeriod=30 Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.605366 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="ceilometer-notification-agent" containerID="cri-o://b237dbb0b3fabc4a63137362a724cff366dd721fe61273e69c7ef147a8986356" gracePeriod=30 Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.640014 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.611001884 podStartE2EDuration="49.639993415s" podCreationTimestamp="2025-11-28 17:18:38 +0000 UTC" firstStartedPulling="2025-11-28 17:18:40.800684067 
+0000 UTC m=+1210.058984112" lastFinishedPulling="2025-11-28 17:19:26.829675588 +0000 UTC m=+1256.087975643" observedRunningTime="2025-11-28 17:19:27.637985882 +0000 UTC m=+1256.896285937" watchObservedRunningTime="2025-11-28 17:19:27.639993415 +0000 UTC m=+1256.898293460" Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.689658 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-676bbb9799-m7pq6"] Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.702534 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-757985fd5d-pvjnf"] Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.721152 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-575d5c9474-zgdcv"] Nov 28 17:19:27 crc kubenswrapper[4710]: W1128 17:19:27.721830 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a0e62fb_f82d_4585_8c51_9c3d947027e9.slice/crio-f2a3c7c3e6d317cb654be5e1ded6c75c66f1734211c747ec6a1f863a11ed2bf7 WatchSource:0}: Error finding container f2a3c7c3e6d317cb654be5e1ded6c75c66f1734211c747ec6a1f863a11ed2bf7: Status 404 returned error can't find the container with id f2a3c7c3e6d317cb654be5e1ded6c75c66f1734211c747ec6a1f863a11ed2bf7 Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.727871 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f976d8c48-8849p"] Nov 28 17:19:27 crc kubenswrapper[4710]: I1128 17:19:27.784690 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6w5wf"] Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.615732 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" event={"ID":"e68f87dd-9d5b-4917-8a8b-1794e4f6668c","Type":"ContainerStarted","Data":"2ddc5dab554eb63329b76a128a6b5effc96fc389b2be325fd1b33107e85d1945"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.616291 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" event={"ID":"e68f87dd-9d5b-4917-8a8b-1794e4f6668c","Type":"ContainerStarted","Data":"a5331470fd54684a77bd6f99bc7ec3bc6e2a5ff45ac59b21da817e9f5bb3c8fb"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.618284 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f2xjj" event={"ID":"eedde5de-ead1-462b-a55f-3473c0f09f43","Type":"ContainerStarted","Data":"4af03b23471f9f2bd5093dfe34255de6e6c35f8acc71fefa583e1569cc1c3392"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.622117 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-575d5c9474-zgdcv" event={"ID":"97785c4a-071b-453d-b0ad-693c6934b43b","Type":"ContainerStarted","Data":"de4e6c6a6a1cdf321ad744a1853e3d3c36af88184ee406bdce2c2f7cf5911245"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.622161 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-575d5c9474-zgdcv" event={"ID":"97785c4a-071b-453d-b0ad-693c6934b43b","Type":"ContainerStarted","Data":"2e6f2224692151307b332e0956d6f28243a9eeb54c7fea1977c35ab2406fb7cd"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.627264 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" event={"ID":"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7","Type":"ContainerStarted","Data":"b784edc97426aaed3aaed4d7eb98c5fbb22425226e566d2c208df620541995e0"} 
Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.628148 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-676bbb9799-m7pq6" event={"ID":"3a0e62fb-f82d-4585-8c51-9c3d947027e9","Type":"ContainerStarted","Data":"f2a3c7c3e6d317cb654be5e1ded6c75c66f1734211c747ec6a1f863a11ed2bf7"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.629576 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-757985fd5d-pvjnf" event={"ID":"c052297b-c856-44c2-8fd2-66f76671785b","Type":"ContainerStarted","Data":"01a9e092772fff450ceced2f33b73ec6d546cc3e1e002686538cc5e614be79fc"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.629606 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-757985fd5d-pvjnf" event={"ID":"c052297b-c856-44c2-8fd2-66f76671785b","Type":"ContainerStarted","Data":"b14a5530c97cf84eee5c003be4016f3da2171741ba771d1708caaa724c57d6cc"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.638470 4710 generic.go:334] "Generic (PLEG): container finished" podID="946b6bdb-75de-4047-a448-fb453e602b7f" containerID="ce5268e5ae2a72c54e285a5c9349555746169bd7341239bdd71d7dd4f9b913fd" exitCode=0 Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.638696 4710 generic.go:334] "Generic (PLEG): container finished" podID="946b6bdb-75de-4047-a448-fb453e602b7f" containerID="9e12adecb1b3b33184238b5cb2c9c403c57d9e6c4a87289108280819023e39e5" exitCode=2 Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.638770 4710 generic.go:334] "Generic (PLEG): container finished" podID="946b6bdb-75de-4047-a448-fb453e602b7f" containerID="41fbf3acdb877076b8bbd2b71856051d28dfd2a86b1063d6888f70c29b5b1900" exitCode=0 Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.639229 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerDied","Data":"ce5268e5ae2a72c54e285a5c9349555746169bd7341239bdd71d7dd4f9b913fd"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.639294 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerDied","Data":"9e12adecb1b3b33184238b5cb2c9c403c57d9e6c4a87289108280819023e39e5"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.639310 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerDied","Data":"41fbf3acdb877076b8bbd2b71856051d28dfd2a86b1063d6888f70c29b5b1900"} Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.646334 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-f2xjj" podStartSLOduration=3.971497772 podStartE2EDuration="50.646316935s" podCreationTimestamp="2025-11-28 17:18:38 +0000 UTC" firstStartedPulling="2025-11-28 17:18:40.10912313 +0000 UTC m=+1209.367423175" lastFinishedPulling="2025-11-28 17:19:26.783942293 +0000 UTC m=+1256.042242338" observedRunningTime="2025-11-28 17:19:28.637072706 +0000 UTC m=+1257.895372751" watchObservedRunningTime="2025-11-28 17:19:28.646316935 +0000 UTC m=+1257.904616980" Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.906916 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.907032 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 
28 17:19:28 crc kubenswrapper[4710]: I1128 17:19:28.909163 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.657393 4710 generic.go:334] "Generic (PLEG): container finished" podID="e68f87dd-9d5b-4917-8a8b-1794e4f6668c" containerID="2ddc5dab554eb63329b76a128a6b5effc96fc389b2be325fd1b33107e85d1945" exitCode=0 Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.657574 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" event={"ID":"e68f87dd-9d5b-4917-8a8b-1794e4f6668c","Type":"ContainerDied","Data":"2ddc5dab554eb63329b76a128a6b5effc96fc389b2be325fd1b33107e85d1945"} Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.664629 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-575d5c9474-zgdcv" event={"ID":"97785c4a-071b-453d-b0ad-693c6934b43b","Type":"ContainerStarted","Data":"f4356daaa3d8ecc8299b7f870f7b2d39aa6d347d88cad162099390f5e576b8c1"} Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.665517 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.665544 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.671090 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.671118 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.671929 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-757985fd5d-pvjnf" event={"ID":"c052297b-c856-44c2-8fd2-66f76671785b","Type":"ContainerStarted","Data":"27c3b35a19d465ab8cd4b2b39a290f9bccebd6407e1b67798238a91507b9b81e"} Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.671983 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.672001 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.710462 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-575d5c9474-zgdcv" podStartSLOduration=6.710439661 podStartE2EDuration="6.710439661s" podCreationTimestamp="2025-11-28 17:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:29.709978928 +0000 UTC m=+1258.968278973" watchObservedRunningTime="2025-11-28 17:19:29.710439661 +0000 UTC m=+1258.968739706" Nov 28 17:19:29 crc kubenswrapper[4710]: I1128 17:19:29.739912 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-757985fd5d-pvjnf" podStartSLOduration=4.739896157 podStartE2EDuration="4.739896157s" podCreationTimestamp="2025-11-28 17:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:29.73394609 +0000 UTC m=+1258.992246145" watchObservedRunningTime="2025-11-28 17:19:29.739896157 +0000 UTC m=+1258.998196202" Nov 28 17:19:30 crc 
kubenswrapper[4710]: I1128 17:19:30.059928 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:30 crc kubenswrapper[4710]: I1128 17:19:30.063473 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 28 17:19:30 crc kubenswrapper[4710]: I1128 17:19:30.692352 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" event={"ID":"e68f87dd-9d5b-4917-8a8b-1794e4f6668c","Type":"ContainerStarted","Data":"26cd6d46b677dc3e94585711274e17846f6f97e2415d5518b154f60f92a15968"} Nov 28 17:19:30 crc kubenswrapper[4710]: I1128 17:19:30.692729 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:30 crc kubenswrapper[4710]: I1128 17:19:30.725356 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" podStartSLOduration=7.725339282 podStartE2EDuration="7.725339282s" podCreationTimestamp="2025-11-28 17:19:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:30.723546766 +0000 UTC m=+1259.981846811" watchObservedRunningTime="2025-11-28 17:19:30.725339282 +0000 UTC m=+1259.983639327" Nov 28 17:19:31 crc kubenswrapper[4710]: E1128 17:19:31.547131 4710 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod946b6bdb_75de_4047_a448_fb453e602b7f.slice/crio-b237dbb0b3fabc4a63137362a724cff366dd721fe61273e69c7ef147a8986356.scope\": RecentStats: unable to find data in memory cache]" Nov 28 17:19:31 crc kubenswrapper[4710]: I1128 17:19:31.707817 4710 generic.go:334] "Generic (PLEG): container finished" podID="946b6bdb-75de-4047-a448-fb453e602b7f" containerID="b237dbb0b3fabc4a63137362a724cff366dd721fe61273e69c7ef147a8986356" exitCode=0 Nov 28 17:19:31 crc kubenswrapper[4710]: I1128 17:19:31.707871 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerDied","Data":"b237dbb0b3fabc4a63137362a724cff366dd721fe61273e69c7ef147a8986356"} Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.412812 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.516079 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmnhd\" (UniqueName: \"kubernetes.io/projected/946b6bdb-75de-4047-a448-fb453e602b7f-kube-api-access-tmnhd\") pod \"946b6bdb-75de-4047-a448-fb453e602b7f\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.516133 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-sg-core-conf-yaml\") pod \"946b6bdb-75de-4047-a448-fb453e602b7f\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.516165 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-config-data\") pod \"946b6bdb-75de-4047-a448-fb453e602b7f\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.516223 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-scripts\") pod \"946b6bdb-75de-4047-a448-fb453e602b7f\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.516413 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-run-httpd\") pod \"946b6bdb-75de-4047-a448-fb453e602b7f\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.516441 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-combined-ca-bundle\") pod \"946b6bdb-75de-4047-a448-fb453e602b7f\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.516466 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-log-httpd\") pod \"946b6bdb-75de-4047-a448-fb453e602b7f\" (UID: \"946b6bdb-75de-4047-a448-fb453e602b7f\") " Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.516989 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "946b6bdb-75de-4047-a448-fb453e602b7f" (UID: "946b6bdb-75de-4047-a448-fb453e602b7f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.517107 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "946b6bdb-75de-4047-a448-fb453e602b7f" (UID: "946b6bdb-75de-4047-a448-fb453e602b7f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.522392 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-scripts" (OuterVolumeSpecName: "scripts") pod "946b6bdb-75de-4047-a448-fb453e602b7f" (UID: "946b6bdb-75de-4047-a448-fb453e602b7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.522676 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/946b6bdb-75de-4047-a448-fb453e602b7f-kube-api-access-tmnhd" (OuterVolumeSpecName: "kube-api-access-tmnhd") pod "946b6bdb-75de-4047-a448-fb453e602b7f" (UID: "946b6bdb-75de-4047-a448-fb453e602b7f"). InnerVolumeSpecName "kube-api-access-tmnhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.559815 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "946b6bdb-75de-4047-a448-fb453e602b7f" (UID: "946b6bdb-75de-4047-a448-fb453e602b7f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.618472 4710 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.618516 4710 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/946b6bdb-75de-4047-a448-fb453e602b7f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.618529 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmnhd\" (UniqueName: \"kubernetes.io/projected/946b6bdb-75de-4047-a448-fb453e602b7f-kube-api-access-tmnhd\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.618542 4710 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.618554 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.635035 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "946b6bdb-75de-4047-a448-fb453e602b7f" (UID: "946b6bdb-75de-4047-a448-fb453e602b7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.659698 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-config-data" (OuterVolumeSpecName: "config-data") pod "946b6bdb-75de-4047-a448-fb453e602b7f" (UID: "946b6bdb-75de-4047-a448-fb453e602b7f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.719957 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.719989 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/946b6bdb-75de-4047-a448-fb453e602b7f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.720043 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-676bbb9799-m7pq6" event={"ID":"3a0e62fb-f82d-4585-8c51-9c3d947027e9","Type":"ContainerStarted","Data":"8620a05c2fc9704cbb1a7cdec1202b1e67c30ddc4772251fdfbc385afdd3c75a"} Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.720119 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-676bbb9799-m7pq6" event={"ID":"3a0e62fb-f82d-4585-8c51-9c3d947027e9","Type":"ContainerStarted","Data":"26387239abffc00c2ee1cfcc271e7334f85377061cb9d827219f9fe829bbcaf1"} Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.725775 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"946b6bdb-75de-4047-a448-fb453e602b7f","Type":"ContainerDied","Data":"735a302e446a7c8c4bdd569941fb04ba088b11abfeee7c1fffd75acb5fadf71c"} Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.726000 4710 scope.go:117] "RemoveContainer" containerID="ce5268e5ae2a72c54e285a5c9349555746169bd7341239bdd71d7dd4f9b913fd" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.725813 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.727561 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" event={"ID":"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7","Type":"ContainerStarted","Data":"749d803b8f7202938535c96153fd3c69e30b64453260699078dd387c5e8741a2"} Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.727597 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" event={"ID":"e5a6ae13-4584-4438-a7eb-fd33a80e8ee7","Type":"ContainerStarted","Data":"0cd8bf8fc3e5f3fa72e1d975557ac04a8ce9f03367b9f2b648fcad02fd136d0c"} Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.743691 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-676bbb9799-m7pq6" podStartSLOduration=5.82571128 podStartE2EDuration="9.743653062s" podCreationTimestamp="2025-11-28 17:19:23 +0000 UTC" firstStartedPulling="2025-11-28 17:19:27.75193925 +0000 UTC m=+1257.010239295" lastFinishedPulling="2025-11-28 17:19:31.669881022 +0000 UTC m=+1260.928181077" observedRunningTime="2025-11-28 17:19:32.741169004 +0000 UTC m=+1261.999469069" watchObservedRunningTime="2025-11-28 17:19:32.743653062 +0000 UTC m=+1262.001953127" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.755161 4710 scope.go:117] "RemoveContainer" containerID="9e12adecb1b3b33184238b5cb2c9c403c57d9e6c4a87289108280819023e39e5" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.764538 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5f976d8c48-8849p" podStartSLOduration=5.819829874 podStartE2EDuration="9.764515016s" podCreationTimestamp="2025-11-28 17:19:23 +0000 UTC" firstStartedPulling="2025-11-28 17:19:27.727154992 +0000 UTC m=+1256.985455037" lastFinishedPulling="2025-11-28 17:19:31.671840124 +0000 UTC m=+1260.930140179" observedRunningTime="2025-11-28 17:19:32.758694894 +0000 UTC m=+1262.016994949" watchObservedRunningTime="2025-11-28 17:19:32.764515016 +0000 UTC m=+1262.022815061" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.789674 4710 scope.go:117] "RemoveContainer" containerID="b237dbb0b3fabc4a63137362a724cff366dd721fe61273e69c7ef147a8986356" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.797267 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.810360 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.829531 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:19:32 crc kubenswrapper[4710]: E1128 17:19:32.829957 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="ceilometer-notification-agent" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.829978 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="ceilometer-notification-agent" Nov 28 17:19:32 crc kubenswrapper[4710]: E1128 17:19:32.829995 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="proxy-httpd" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.830001 4710 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="proxy-httpd" Nov 28 17:19:32 crc kubenswrapper[4710]: E1128 17:19:32.830021 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="sg-core" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.830026 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="sg-core" Nov 28 17:19:32 crc kubenswrapper[4710]: E1128 17:19:32.830038 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="ceilometer-central-agent" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.830044 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="ceilometer-central-agent" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.832631 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="proxy-httpd" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.832656 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="ceilometer-central-agent" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.832678 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="ceilometer-notification-agent" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.832695 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" containerName="sg-core" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.834990 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.837508 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.837848 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.840296 4710 scope.go:117] "RemoveContainer" containerID="41fbf3acdb877076b8bbd2b71856051d28dfd2a86b1063d6888f70c29b5b1900" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.852342 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.925258 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-config-data\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.925347 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.925367 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-scripts\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.925474 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-run-httpd\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.925496 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-log-httpd\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.925539 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:32 crc kubenswrapper[4710]: I1128 17:19:32.925566 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpsk8\" (UniqueName: \"kubernetes.io/projected/3d621306-f4b5-4cb5-a1b5-971a4444496a-kube-api-access-xpsk8\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.028771 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.028818 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-scripts\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.028908 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-run-httpd\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.028931 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-log-httpd\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.028963 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.028986 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpsk8\" (UniqueName: \"kubernetes.io/projected/3d621306-f4b5-4cb5-a1b5-971a4444496a-kube-api-access-xpsk8\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.029078 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-config-data\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.032833 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-run-httpd\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.033990 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-log-httpd\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.039443 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-config-data\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.050568 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.051600 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-scripts\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.059314 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.065510 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpsk8\" (UniqueName: \"kubernetes.io/projected/3d621306-f4b5-4cb5-a1b5-971a4444496a-kube-api-access-xpsk8\") pod \"ceilometer-0\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.159298 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="946b6bdb-75de-4047-a448-fb453e602b7f" path="/var/lib/kubelet/pods/946b6bdb-75de-4047-a448-fb453e602b7f/volumes" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.164458 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:19:33 crc kubenswrapper[4710]: I1128 17:19:33.745744 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:19:34 crc kubenswrapper[4710]: I1128 17:19:34.759014 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerStarted","Data":"e3768037e9213310d308863b2bb69166db77cc68ebad6efe6dcf9cb820fbbeee"} Nov 28 17:19:34 crc kubenswrapper[4710]: I1128 17:19:34.759359 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerStarted","Data":"2a05a7f75eb9f3be1a6c065caf30a2d9347491c41fe335394f65c988f47f0b47"} Nov 28 17:19:35 crc kubenswrapper[4710]: I1128 17:19:35.708366 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:35 crc kubenswrapper[4710]: I1128 17:19:35.752991 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:35 crc kubenswrapper[4710]: I1128 17:19:35.771688 4710 generic.go:334] "Generic (PLEG): container finished" podID="eedde5de-ead1-462b-a55f-3473c0f09f43" containerID="4af03b23471f9f2bd5093dfe34255de6e6c35f8acc71fefa583e1569cc1c3392" exitCode=0 Nov 28 17:19:35 crc kubenswrapper[4710]: I1128 17:19:35.771788 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f2xjj" event={"ID":"eedde5de-ead1-462b-a55f-3473c0f09f43","Type":"ContainerDied","Data":"4af03b23471f9f2bd5093dfe34255de6e6c35f8acc71fefa583e1569cc1c3392"} Nov 28 17:19:36 crc kubenswrapper[4710]: I1128 17:19:36.625888 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:36 crc kubenswrapper[4710]: I1128 
17:19:36.810172 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerStarted","Data":"9efa9640b04942081a34e39c7ea123b9799f20254cd7e44378ed45d078a997ca"} Nov 28 17:19:36 crc kubenswrapper[4710]: I1128 17:19:36.810466 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerStarted","Data":"1b945c120ee5e0f0b518b53f408f0ba54246191b5bf7aece01f6916b3ed7dc6a"} Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.317333 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.357858 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-db-sync-config-data\") pod \"eedde5de-ead1-462b-a55f-3473c0f09f43\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.357991 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eedde5de-ead1-462b-a55f-3473c0f09f43-etc-machine-id\") pod \"eedde5de-ead1-462b-a55f-3473c0f09f43\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.358128 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-config-data\") pod \"eedde5de-ead1-462b-a55f-3473c0f09f43\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.358151 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-combined-ca-bundle\") pod \"eedde5de-ead1-462b-a55f-3473c0f09f43\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.358176 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-486tv\" (UniqueName: \"kubernetes.io/projected/eedde5de-ead1-462b-a55f-3473c0f09f43-kube-api-access-486tv\") pod \"eedde5de-ead1-462b-a55f-3473c0f09f43\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.358280 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-scripts\") pod \"eedde5de-ead1-462b-a55f-3473c0f09f43\" (UID: \"eedde5de-ead1-462b-a55f-3473c0f09f43\") " Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.373183 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eedde5de-ead1-462b-a55f-3473c0f09f43-kube-api-access-486tv" (OuterVolumeSpecName: "kube-api-access-486tv") pod "eedde5de-ead1-462b-a55f-3473c0f09f43" (UID: "eedde5de-ead1-462b-a55f-3473c0f09f43"). InnerVolumeSpecName "kube-api-access-486tv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.373445 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eedde5de-ead1-462b-a55f-3473c0f09f43-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "eedde5de-ead1-462b-a55f-3473c0f09f43" (UID: "eedde5de-ead1-462b-a55f-3473c0f09f43"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.390016 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-scripts" (OuterVolumeSpecName: "scripts") pod "eedde5de-ead1-462b-a55f-3473c0f09f43" (UID: "eedde5de-ead1-462b-a55f-3473c0f09f43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.404521 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "eedde5de-ead1-462b-a55f-3473c0f09f43" (UID: "eedde5de-ead1-462b-a55f-3473c0f09f43"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.442879 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eedde5de-ead1-462b-a55f-3473c0f09f43" (UID: "eedde5de-ead1-462b-a55f-3473c0f09f43"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.449902 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-config-data" (OuterVolumeSpecName: "config-data") pod "eedde5de-ead1-462b-a55f-3473c0f09f43" (UID: "eedde5de-ead1-462b-a55f-3473c0f09f43"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.461165 4710 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/eedde5de-ead1-462b-a55f-3473c0f09f43-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.461199 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.461208 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.461218 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-486tv\" (UniqueName: \"kubernetes.io/projected/eedde5de-ead1-462b-a55f-3473c0f09f43-kube-api-access-486tv\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.461228 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.461237 4710 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/eedde5de-ead1-462b-a55f-3473c0f09f43-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.821113 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-f2xjj" event={"ID":"eedde5de-ead1-462b-a55f-3473c0f09f43","Type":"ContainerDied","Data":"a936ac67d0bf036a9717cecd0a769101105ea7ce3fb97995f80338706ea50126"} Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.821152 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a936ac67d0bf036a9717cecd0a769101105ea7ce3fb97995f80338706ea50126" Nov 28 17:19:37 crc kubenswrapper[4710]: I1128 17:19:37.821216 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-f2xjj" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.200232 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6w5wf"] Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.200470 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" podUID="e68f87dd-9d5b-4917-8a8b-1794e4f6668c" containerName="dnsmasq-dns" containerID="cri-o://26cd6d46b677dc3e94585711274e17846f6f97e2415d5518b154f60f92a15968" gracePeriod=10 Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.202493 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.213530 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:19:38 crc kubenswrapper[4710]: E1128 17:19:38.214149 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eedde5de-ead1-462b-a55f-3473c0f09f43" containerName="cinder-db-sync" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.214169 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="eedde5de-ead1-462b-a55f-3473c0f09f43" containerName="cinder-db-sync" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.214467 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="eedde5de-ead1-462b-a55f-3473c0f09f43" containerName="cinder-db-sync" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.215917 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.218803 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7b762" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.219384 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.219642 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.224983 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.261265 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.282978 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.283035 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7pzc\" (UniqueName: \"kubernetes.io/projected/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-kube-api-access-g7pzc\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.283110 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.283128 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.283174 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.283207 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.376030 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-kjnkn"] Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.382293 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.388154 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.388194 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7pzc\" (UniqueName: \"kubernetes.io/projected/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-kube-api-access-g7pzc\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.388262 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.388279 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.388314 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc 
kubenswrapper[4710]: I1128 17:19:38.388351 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.389329 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.403748 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.404921 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.406003 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-scripts\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.411655 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.411863 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-kjnkn"] Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.441195 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7pzc\" (UniqueName: \"kubernetes.io/projected/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-kube-api-access-g7pzc\") pod \"cinder-scheduler-0\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.491417 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.491854 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 
17:19:38.492161 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t8qc\" (UniqueName: \"kubernetes.io/projected/85ac3d96-65a4-4549-a26e-a12e06ae39af-kube-api-access-9t8qc\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.492308 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-config\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.492536 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-svc\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.492980 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.526997 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.529525 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.574515 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.583353 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.595490 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.595535 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-scripts\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.595588 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-svc\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.595636 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a9c794-98dd-4e4c-bd00-9c887d614b1a-logs\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.595676 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.595704 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87a9c794-98dd-4e4c-bd00-9c887d614b1a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.597330 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-svc\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.598264 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.599894 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " 
pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.599999 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.600062 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwshf\" (UniqueName: \"kubernetes.io/projected/87a9c794-98dd-4e4c-bd00-9c887d614b1a-kube-api-access-kwshf\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.600240 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.600269 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.600408 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t8qc\" (UniqueName: \"kubernetes.io/projected/85ac3d96-65a4-4549-a26e-a12e06ae39af-kube-api-access-9t8qc\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.600837 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-config\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.601162 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data-custom\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.601488 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.601698 4710 scope.go:117] "RemoveContainer" containerID="e1273e0b41b57ab4032a66b94f5b8c9924238d18433a5bba10f52f7a197ae8da" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.608152 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.609459 4710 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-config\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.629311 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t8qc\" (UniqueName: \"kubernetes.io/projected/85ac3d96-65a4-4549-a26e-a12e06ae39af-kube-api-access-9t8qc\") pod \"dnsmasq-dns-6578955fd5-kjnkn\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.709411 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data-custom\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.711840 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.711880 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-scripts\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.711979 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a9c794-98dd-4e4c-bd00-9c887d614b1a-logs\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.712058 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87a9c794-98dd-4e4c-bd00-9c887d614b1a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.712182 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwshf\" (UniqueName: \"kubernetes.io/projected/87a9c794-98dd-4e4c-bd00-9c887d614b1a-kube-api-access-kwshf\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.712316 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.719298 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc 
kubenswrapper[4710]: I1128 17:19:38.719622 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a9c794-98dd-4e4c-bd00-9c887d614b1a-logs\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.719690 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87a9c794-98dd-4e4c-bd00-9c887d614b1a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.724501 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-scripts\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.725172 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data-custom\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.729374 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.739374 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwshf\" (UniqueName: \"kubernetes.io/projected/87a9c794-98dd-4e4c-bd00-9c887d614b1a-kube-api-access-kwshf\") pod \"cinder-api-0\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " pod="openstack/cinder-api-0" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.852606 4710 generic.go:334] "Generic (PLEG): container finished" podID="e68f87dd-9d5b-4917-8a8b-1794e4f6668c" containerID="26cd6d46b677dc3e94585711274e17846f6f97e2415d5518b154f60f92a15968" exitCode=0 Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.852650 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" event={"ID":"e68f87dd-9d5b-4917-8a8b-1794e4f6668c","Type":"ContainerDied","Data":"26cd6d46b677dc3e94585711274e17846f6f97e2415d5518b154f60f92a15968"} Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.854429 4710 scope.go:117] "RemoveContainer" containerID="023da7b86f380165c28aec9c8d2469bc745368bf6ea21ddeede5b76ec019767c" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.908351 4710 scope.go:117] "RemoveContainer" containerID="1e148ba7f3258345877eb76352201a4632c7fd93a67f504e2ca230c2a7d8f61a" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.963278 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:38 crc kubenswrapper[4710]: I1128 17:19:38.964838 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.020219 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.042488 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.125624 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-svc\") pod \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.125738 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-nb\") pod \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.125964 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-swift-storage-0\") pod \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.126091 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-sb\") pod \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.126135 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhgck\" (UniqueName: \"kubernetes.io/projected/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-kube-api-access-fhgck\") pod \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.126214 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-config\") pod \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\" (UID: \"e68f87dd-9d5b-4917-8a8b-1794e4f6668c\") " Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.140294 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-757985fd5d-pvjnf" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.144481 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-kube-api-access-fhgck" (OuterVolumeSpecName: "kube-api-access-fhgck") pod "e68f87dd-9d5b-4917-8a8b-1794e4f6668c" (UID: "e68f87dd-9d5b-4917-8a8b-1794e4f6668c"). InnerVolumeSpecName "kube-api-access-fhgck". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.238748 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhgck\" (UniqueName: \"kubernetes.io/projected/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-kube-api-access-fhgck\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.315408 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e68f87dd-9d5b-4917-8a8b-1794e4f6668c" (UID: "e68f87dd-9d5b-4917-8a8b-1794e4f6668c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.344330 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.346943 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-575d5c9474-zgdcv"] Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.347362 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-575d5c9474-zgdcv" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api-log" containerID="cri-o://de4e6c6a6a1cdf321ad744a1853e3d3c36af88184ee406bdce2c2f7cf5911245" gracePeriod=30 Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.347600 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e68f87dd-9d5b-4917-8a8b-1794e4f6668c" (UID: "e68f87dd-9d5b-4917-8a8b-1794e4f6668c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.347728 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-575d5c9474-zgdcv" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api" containerID="cri-o://f4356daaa3d8ecc8299b7f870f7b2d39aa6d347d88cad162099390f5e576b8c1" gracePeriod=30 Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.360659 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-575d5c9474-zgdcv" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.184:9311/healthcheck\": EOF" Nov 28 17:19:39 crc kubenswrapper[4710]: W1128 17:19:39.374776 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8eeb7cf0_ca13_40c0_a2f1_8089959a37e8.slice/crio-d30f2f7bd427eb272fb15fc7762d6b81bb172a85cf3eb5fa7a7025763d636c3f WatchSource:0}: Error finding container d30f2f7bd427eb272fb15fc7762d6b81bb172a85cf3eb5fa7a7025763d636c3f: Status 404 returned error can't find the container with id d30f2f7bd427eb272fb15fc7762d6b81bb172a85cf3eb5fa7a7025763d636c3f Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.376313 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e68f87dd-9d5b-4917-8a8b-1794e4f6668c" (UID: "e68f87dd-9d5b-4917-8a8b-1794e4f6668c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.382907 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.396909 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-config" (OuterVolumeSpecName: "config") pod "e68f87dd-9d5b-4917-8a8b-1794e4f6668c" (UID: "e68f87dd-9d5b-4917-8a8b-1794e4f6668c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.404942 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e68f87dd-9d5b-4917-8a8b-1794e4f6668c" (UID: "e68f87dd-9d5b-4917-8a8b-1794e4f6668c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.446155 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.446199 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.446215 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.446229 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e68f87dd-9d5b-4917-8a8b-1794e4f6668c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.716429 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-kjnkn"] Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.892911 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.896780 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" event={"ID":"e68f87dd-9d5b-4917-8a8b-1794e4f6668c","Type":"ContainerDied","Data":"a5331470fd54684a77bd6f99bc7ec3bc6e2a5ff45ac59b21da817e9f5bb3c8fb"} Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.896847 4710 scope.go:117] "RemoveContainer" containerID="26cd6d46b677dc3e94585711274e17846f6f97e2415d5518b154f60f92a15968" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.897026 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-6w5wf" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.904360 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerStarted","Data":"b0b1d5bc08b7cf55c02f06d0a5d7423e180adace9aca3b7907c754c2617f0845"} Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.904540 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.908944 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" event={"ID":"85ac3d96-65a4-4549-a26e-a12e06ae39af","Type":"ContainerStarted","Data":"142b3570e95dc8cd7200d391953d2f687a2acb8e2d70dc24ae7bf8693e6033e8"} Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.910710 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8","Type":"ContainerStarted","Data":"d30f2f7bd427eb272fb15fc7762d6b81bb172a85cf3eb5fa7a7025763d636c3f"} Nov 28 17:19:39 crc kubenswrapper[4710]: W1128 17:19:39.913643 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87a9c794_98dd_4e4c_bd00_9c887d614b1a.slice/crio-e53fa64301ee21b608367c7a5b8fb185b96695488c374eb76144ad8bc18ec452 WatchSource:0}: Error finding container e53fa64301ee21b608367c7a5b8fb185b96695488c374eb76144ad8bc18ec452: Status 404 returned error can't find the container with id e53fa64301ee21b608367c7a5b8fb185b96695488c374eb76144ad8bc18ec452 Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.919932 4710 generic.go:334] "Generic (PLEG): container finished" podID="97785c4a-071b-453d-b0ad-693c6934b43b" containerID="de4e6c6a6a1cdf321ad744a1853e3d3c36af88184ee406bdce2c2f7cf5911245" exitCode=143 Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.920654 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-575d5c9474-zgdcv" event={"ID":"97785c4a-071b-453d-b0ad-693c6934b43b","Type":"ContainerDied","Data":"de4e6c6a6a1cdf321ad744a1853e3d3c36af88184ee406bdce2c2f7cf5911245"} Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.949838 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.435017968 podStartE2EDuration="7.949793212s" podCreationTimestamp="2025-11-28 17:19:32 +0000 UTC" firstStartedPulling="2025-11-28 17:19:33.752846833 +0000 UTC m=+1263.011146878" lastFinishedPulling="2025-11-28 17:19:38.267622077 +0000 UTC m=+1267.525922122" observedRunningTime="2025-11-28 17:19:39.931318742 +0000 UTC m=+1269.189618827" watchObservedRunningTime="2025-11-28 17:19:39.949793212 +0000 UTC m=+1269.208093257" Nov 28 17:19:39 crc kubenswrapper[4710]: I1128 17:19:39.966514 4710 scope.go:117] "RemoveContainer" containerID="2ddc5dab554eb63329b76a128a6b5effc96fc389b2be325fd1b33107e85d1945" Nov 28 17:19:40 crc kubenswrapper[4710]: I1128 17:19:39.998917 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6w5wf"] Nov 28 17:19:40 crc kubenswrapper[4710]: I1128 17:19:40.011473 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-6w5wf"] Nov 28 17:19:40 crc kubenswrapper[4710]: I1128 17:19:40.864426 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:19:40 crc 
kubenswrapper[4710]: I1128 17:19:40.951736 4710 generic.go:334] "Generic (PLEG): container finished" podID="85ac3d96-65a4-4549-a26e-a12e06ae39af" containerID="ab3cc35d6d6f50efbe7e1c67bf1d9aff0fcc0a5a49dc88e93a6e8f1244227a6e" exitCode=0 Nov 28 17:19:40 crc kubenswrapper[4710]: I1128 17:19:40.952029 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" event={"ID":"85ac3d96-65a4-4549-a26e-a12e06ae39af","Type":"ContainerDied","Data":"ab3cc35d6d6f50efbe7e1c67bf1d9aff0fcc0a5a49dc88e93a6e8f1244227a6e"} Nov 28 17:19:40 crc kubenswrapper[4710]: I1128 17:19:40.983947 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87a9c794-98dd-4e4c-bd00-9c887d614b1a","Type":"ContainerStarted","Data":"c09950b46b487f663145d843061c6fb7cb4cc856437d8ec5ce3a130b9a8c4e8c"} Nov 28 17:19:40 crc kubenswrapper[4710]: I1128 17:19:40.984224 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87a9c794-98dd-4e4c-bd00-9c887d614b1a","Type":"ContainerStarted","Data":"e53fa64301ee21b608367c7a5b8fb185b96695488c374eb76144ad8bc18ec452"} Nov 28 17:19:41 crc kubenswrapper[4710]: I1128 17:19:41.170728 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68f87dd-9d5b-4917-8a8b-1794e4f6668c" path="/var/lib/kubelet/pods/e68f87dd-9d5b-4917-8a8b-1794e4f6668c/volumes" Nov 28 17:19:41 crc kubenswrapper[4710]: I1128 17:19:41.915070 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.018053 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8","Type":"ContainerStarted","Data":"29624f03592269c9f2a02555280a531861c47774277dab948975ef4c16dcec98"} Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.022034 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87a9c794-98dd-4e4c-bd00-9c887d614b1a","Type":"ContainerStarted","Data":"643e6ab79e908290f4b7feca23692019d91eeb9fb5cf9d88eb79e505e8bfdfda"} Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.022560 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerName="cinder-api-log" containerID="cri-o://c09950b46b487f663145d843061c6fb7cb4cc856437d8ec5ce3a130b9a8c4e8c" gracePeriod=30 Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.022708 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.023169 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerName="cinder-api" containerID="cri-o://643e6ab79e908290f4b7feca23692019d91eeb9fb5cf9d88eb79e505e8bfdfda" gracePeriod=30 Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.040969 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" event={"ID":"85ac3d96-65a4-4549-a26e-a12e06ae39af","Type":"ContainerStarted","Data":"e42555f5aa7f0e6dbfef4d03457e5ee72007d120c3f5f0f3c55859cf2844df33"} Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.041981 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:42 crc 
kubenswrapper[4710]: I1128 17:19:42.083415 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" podStartSLOduration=4.083394942 podStartE2EDuration="4.083394942s" podCreationTimestamp="2025-11-28 17:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:42.068749972 +0000 UTC m=+1271.327050017" watchObservedRunningTime="2025-11-28 17:19:42.083394942 +0000 UTC m=+1271.341694987" Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.096818 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.096798703 podStartE2EDuration="4.096798703s" podCreationTimestamp="2025-11-28 17:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:42.048861457 +0000 UTC m=+1271.307161502" watchObservedRunningTime="2025-11-28 17:19:42.096798703 +0000 UTC m=+1271.355098748" Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.107660 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-664bc7f8c8-z9vbx" Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.341599 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-654d6f49b5-qjswk" Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.427614 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58777d5fd4-xrcjb"] Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.427873 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58777d5fd4-xrcjb" podUID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerName="neutron-api" containerID="cri-o://2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f" gracePeriod=30 Nov 28 17:19:42 crc kubenswrapper[4710]: I1128 17:19:42.428300 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-58777d5fd4-xrcjb" podUID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerName="neutron-httpd" containerID="cri-o://26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12" gracePeriod=30 Nov 28 17:19:43 crc kubenswrapper[4710]: I1128 17:19:43.052691 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8","Type":"ContainerStarted","Data":"67c294c9dfc96ba1c3730ea600364600b77bd026dbae299d71c55964083f8fce"} Nov 28 17:19:43 crc kubenswrapper[4710]: I1128 17:19:43.055627 4710 generic.go:334] "Generic (PLEG): container finished" podID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerID="26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12" exitCode=0 Nov 28 17:19:43 crc kubenswrapper[4710]: I1128 17:19:43.055716 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58777d5fd4-xrcjb" event={"ID":"18baf4b3-8f80-42fa-8291-377b5ae88a92","Type":"ContainerDied","Data":"26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12"} Nov 28 17:19:43 crc kubenswrapper[4710]: I1128 17:19:43.059043 4710 generic.go:334] "Generic (PLEG): container finished" podID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerID="c09950b46b487f663145d843061c6fb7cb4cc856437d8ec5ce3a130b9a8c4e8c" exitCode=143 Nov 28 17:19:43 crc kubenswrapper[4710]: I1128 17:19:43.059223 4710 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87a9c794-98dd-4e4c-bd00-9c887d614b1a","Type":"ContainerDied","Data":"c09950b46b487f663145d843061c6fb7cb4cc856437d8ec5ce3a130b9a8c4e8c"} Nov 28 17:19:43 crc kubenswrapper[4710]: I1128 17:19:43.077673 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.28042912 podStartE2EDuration="5.077642144s" podCreationTimestamp="2025-11-28 17:19:38 +0000 UTC" firstStartedPulling="2025-11-28 17:19:39.393153322 +0000 UTC m=+1268.651453367" lastFinishedPulling="2025-11-28 17:19:40.190366356 +0000 UTC m=+1269.448666391" observedRunningTime="2025-11-28 17:19:43.073746761 +0000 UTC m=+1272.332046806" watchObservedRunningTime="2025-11-28 17:19:43.077642144 +0000 UTC m=+1272.335942189" Nov 28 17:19:43 crc kubenswrapper[4710]: I1128 17:19:43.575352 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 17:19:44 crc kubenswrapper[4710]: I1128 17:19:44.766798 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-575d5c9474-zgdcv" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.184:9311/healthcheck\": read tcp 10.217.0.2:34440->10.217.0.184:9311: read: connection reset by peer" Nov 28 17:19:44 crc kubenswrapper[4710]: I1128 17:19:44.767007 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-575d5c9474-zgdcv" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.184:9311/healthcheck\": read tcp 10.217.0.2:34444->10.217.0.184:9311: read: connection reset by peer" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.098144 4710 generic.go:334] "Generic (PLEG): container finished" podID="97785c4a-071b-453d-b0ad-693c6934b43b" containerID="f4356daaa3d8ecc8299b7f870f7b2d39aa6d347d88cad162099390f5e576b8c1" exitCode=0 Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.098350 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-575d5c9474-zgdcv" event={"ID":"97785c4a-071b-453d-b0ad-693c6934b43b","Type":"ContainerDied","Data":"f4356daaa3d8ecc8299b7f870f7b2d39aa6d347d88cad162099390f5e576b8c1"} Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.226346 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.354346 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data\") pod \"97785c4a-071b-453d-b0ad-693c6934b43b\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.354510 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97785c4a-071b-453d-b0ad-693c6934b43b-logs\") pod \"97785c4a-071b-453d-b0ad-693c6934b43b\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.354572 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2msdw\" (UniqueName: \"kubernetes.io/projected/97785c4a-071b-453d-b0ad-693c6934b43b-kube-api-access-2msdw\") pod \"97785c4a-071b-453d-b0ad-693c6934b43b\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.354621 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data-custom\") pod \"97785c4a-071b-453d-b0ad-693c6934b43b\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.354659 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-combined-ca-bundle\") pod \"97785c4a-071b-453d-b0ad-693c6934b43b\" (UID: \"97785c4a-071b-453d-b0ad-693c6934b43b\") " Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.366392 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97785c4a-071b-453d-b0ad-693c6934b43b-logs" (OuterVolumeSpecName: "logs") pod "97785c4a-071b-453d-b0ad-693c6934b43b" (UID: "97785c4a-071b-453d-b0ad-693c6934b43b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.372971 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97785c4a-071b-453d-b0ad-693c6934b43b-kube-api-access-2msdw" (OuterVolumeSpecName: "kube-api-access-2msdw") pod "97785c4a-071b-453d-b0ad-693c6934b43b" (UID: "97785c4a-071b-453d-b0ad-693c6934b43b"). InnerVolumeSpecName "kube-api-access-2msdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.373060 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "97785c4a-071b-453d-b0ad-693c6934b43b" (UID: "97785c4a-071b-453d-b0ad-693c6934b43b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.415379 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97785c4a-071b-453d-b0ad-693c6934b43b" (UID: "97785c4a-071b-453d-b0ad-693c6934b43b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.424989 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data" (OuterVolumeSpecName: "config-data") pod "97785c4a-071b-453d-b0ad-693c6934b43b" (UID: "97785c4a-071b-453d-b0ad-693c6934b43b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.457205 4710 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.457244 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.457257 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97785c4a-071b-453d-b0ad-693c6934b43b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.457271 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97785c4a-071b-453d-b0ad-693c6934b43b-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:45 crc kubenswrapper[4710]: I1128 17:19:45.457283 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2msdw\" (UniqueName: \"kubernetes.io/projected/97785c4a-071b-453d-b0ad-693c6934b43b-kube-api-access-2msdw\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:46 crc kubenswrapper[4710]: I1128 17:19:46.111719 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-575d5c9474-zgdcv" event={"ID":"97785c4a-071b-453d-b0ad-693c6934b43b","Type":"ContainerDied","Data":"2e6f2224692151307b332e0956d6f28243a9eeb54c7fea1977c35ab2406fb7cd"} Nov 28 17:19:46 crc kubenswrapper[4710]: I1128 17:19:46.111804 4710 scope.go:117] "RemoveContainer" containerID="f4356daaa3d8ecc8299b7f870f7b2d39aa6d347d88cad162099390f5e576b8c1" Nov 28 17:19:46 crc kubenswrapper[4710]: I1128 17:19:46.111801 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-575d5c9474-zgdcv" Nov 28 17:19:46 crc kubenswrapper[4710]: I1128 17:19:46.150343 4710 scope.go:117] "RemoveContainer" containerID="de4e6c6a6a1cdf321ad744a1853e3d3c36af88184ee406bdce2c2f7cf5911245" Nov 28 17:19:46 crc kubenswrapper[4710]: I1128 17:19:46.156867 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-575d5c9474-zgdcv"] Nov 28 17:19:46 crc kubenswrapper[4710]: I1128 17:19:46.166036 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-575d5c9474-zgdcv"] Nov 28 17:19:47 crc kubenswrapper[4710]: I1128 17:19:47.163328 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" path="/var/lib/kubelet/pods/97785c4a-071b-453d-b0ad-693c6934b43b/volumes" Nov 28 17:19:47 crc kubenswrapper[4710]: I1128 17:19:47.263107 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7559b9d56c-625td" Nov 28 17:19:48 crc kubenswrapper[4710]: I1128 17:19:48.779117 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 17:19:48 crc kubenswrapper[4710]: I1128 17:19:48.844871 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:19:48 crc kubenswrapper[4710]: I1128 17:19:48.966058 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.040118 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2w5q5"] Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.040405 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" podUID="94dc04af-7548-418b-ac27-1d7cf67a4501" containerName="dnsmasq-dns" containerID="cri-o://7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134" gracePeriod=10 Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.146477 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerName="cinder-scheduler" containerID="cri-o://29624f03592269c9f2a02555280a531861c47774277dab948975ef4c16dcec98" gracePeriod=30 Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.146598 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerName="probe" containerID="cri-o://67c294c9dfc96ba1c3730ea600364600b77bd026dbae299d71c55964083f8fce" gracePeriod=30 Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.712680 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.881629 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-config\") pod \"94dc04af-7548-418b-ac27-1d7cf67a4501\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.890270 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vj8c\" (UniqueName: \"kubernetes.io/projected/94dc04af-7548-418b-ac27-1d7cf67a4501-kube-api-access-5vj8c\") pod \"94dc04af-7548-418b-ac27-1d7cf67a4501\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.890423 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-sb\") pod \"94dc04af-7548-418b-ac27-1d7cf67a4501\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.890503 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-swift-storage-0\") pod \"94dc04af-7548-418b-ac27-1d7cf67a4501\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.890546 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-nb\") pod \"94dc04af-7548-418b-ac27-1d7cf67a4501\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.890582 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-svc\") pod \"94dc04af-7548-418b-ac27-1d7cf67a4501\" (UID: \"94dc04af-7548-418b-ac27-1d7cf67a4501\") " Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.905169 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94dc04af-7548-418b-ac27-1d7cf67a4501-kube-api-access-5vj8c" (OuterVolumeSpecName: "kube-api-access-5vj8c") pod "94dc04af-7548-418b-ac27-1d7cf67a4501" (UID: "94dc04af-7548-418b-ac27-1d7cf67a4501"). InnerVolumeSpecName "kube-api-access-5vj8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.974420 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "94dc04af-7548-418b-ac27-1d7cf67a4501" (UID: "94dc04af-7548-418b-ac27-1d7cf67a4501"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.990639 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "94dc04af-7548-418b-ac27-1d7cf67a4501" (UID: "94dc04af-7548-418b-ac27-1d7cf67a4501"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.996207 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vj8c\" (UniqueName: \"kubernetes.io/projected/94dc04af-7548-418b-ac27-1d7cf67a4501-kube-api-access-5vj8c\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.996241 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:49 crc kubenswrapper[4710]: I1128 17:19:49.996251 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.004307 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-config" (OuterVolumeSpecName: "config") pod "94dc04af-7548-418b-ac27-1d7cf67a4501" (UID: "94dc04af-7548-418b-ac27-1d7cf67a4501"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.010342 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "94dc04af-7548-418b-ac27-1d7cf67a4501" (UID: "94dc04af-7548-418b-ac27-1d7cf67a4501"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.010993 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "94dc04af-7548-418b-ac27-1d7cf67a4501" (UID: "94dc04af-7548-418b-ac27-1d7cf67a4501"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.098279 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.098313 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.098326 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/94dc04af-7548-418b-ac27-1d7cf67a4501-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.163515 4710 generic.go:334] "Generic (PLEG): container finished" podID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerID="67c294c9dfc96ba1c3730ea600364600b77bd026dbae299d71c55964083f8fce" exitCode=0 Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.163796 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8","Type":"ContainerDied","Data":"67c294c9dfc96ba1c3730ea600364600b77bd026dbae299d71c55964083f8fce"} Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.173224 4710 generic.go:334] "Generic (PLEG): container finished" podID="94dc04af-7548-418b-ac27-1d7cf67a4501" containerID="7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134" exitCode=0 Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.173260 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" event={"ID":"94dc04af-7548-418b-ac27-1d7cf67a4501","Type":"ContainerDied","Data":"7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134"} Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.173284 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" event={"ID":"94dc04af-7548-418b-ac27-1d7cf67a4501","Type":"ContainerDied","Data":"ea59de97fa4d83cd874c8ab0a495c5b108fb81ab22b325fc1d7e06080d084230"} Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.173299 4710 scope.go:117] "RemoveContainer" containerID="7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.173438 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-2w5q5" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.342897 4710 scope.go:117] "RemoveContainer" containerID="a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.343752 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2w5q5"] Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.359151 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-2w5q5"] Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.394201 4710 scope.go:117] "RemoveContainer" containerID="7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134" Nov 28 17:19:50 crc kubenswrapper[4710]: E1128 17:19:50.397136 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134\": container with ID starting with 7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134 not found: ID does not exist" containerID="7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.397178 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134"} err="failed to get container status \"7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134\": rpc error: code = NotFound desc = could not find container \"7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134\": container with ID starting with 7b384b3686caddd804b832f0a3f2f4958cefc59c14d2876b6a0861bf26f83134 not found: ID does not exist" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.397209 4710 scope.go:117] "RemoveContainer" containerID="a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b" Nov 28 17:19:50 crc kubenswrapper[4710]: E1128 17:19:50.397451 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b\": container with ID starting with a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b not found: ID does not exist" containerID="a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.397472 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b"} err="failed to get container status \"a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b\": rpc error: code = NotFound desc = could not find container \"a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b\": container with ID starting with a50b0985f060213b0ee659ef2cbb6878c1524abb5d2272f4d07b5500799bad8b not found: ID does not exist" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.885257 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.922419 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-config\") pod \"18baf4b3-8f80-42fa-8291-377b5ae88a92\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.922494 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh28s\" (UniqueName: \"kubernetes.io/projected/18baf4b3-8f80-42fa-8291-377b5ae88a92-kube-api-access-mh28s\") pod \"18baf4b3-8f80-42fa-8291-377b5ae88a92\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.922703 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-ovndb-tls-certs\") pod \"18baf4b3-8f80-42fa-8291-377b5ae88a92\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.922744 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-httpd-config\") pod \"18baf4b3-8f80-42fa-8291-377b5ae88a92\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.922861 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-combined-ca-bundle\") pod \"18baf4b3-8f80-42fa-8291-377b5ae88a92\" (UID: \"18baf4b3-8f80-42fa-8291-377b5ae88a92\") " Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.930863 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "18baf4b3-8f80-42fa-8291-377b5ae88a92" (UID: "18baf4b3-8f80-42fa-8291-377b5ae88a92"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.932950 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18baf4b3-8f80-42fa-8291-377b5ae88a92-kube-api-access-mh28s" (OuterVolumeSpecName: "kube-api-access-mh28s") pod "18baf4b3-8f80-42fa-8291-377b5ae88a92" (UID: "18baf4b3-8f80-42fa-8291-377b5ae88a92"). InnerVolumeSpecName "kube-api-access-mh28s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:50 crc kubenswrapper[4710]: I1128 17:19:50.996364 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18baf4b3-8f80-42fa-8291-377b5ae88a92" (UID: "18baf4b3-8f80-42fa-8291-377b5ae88a92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.001462 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-config" (OuterVolumeSpecName: "config") pod "18baf4b3-8f80-42fa-8291-377b5ae88a92" (UID: "18baf4b3-8f80-42fa-8291-377b5ae88a92"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.025256 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.025293 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh28s\" (UniqueName: \"kubernetes.io/projected/18baf4b3-8f80-42fa-8291-377b5ae88a92-kube-api-access-mh28s\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.025303 4710 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.025313 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045141 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.045560 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68f87dd-9d5b-4917-8a8b-1794e4f6668c" containerName="dnsmasq-dns" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045576 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68f87dd-9d5b-4917-8a8b-1794e4f6668c" containerName="dnsmasq-dns" Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.045596 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerName="neutron-api" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045603 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerName="neutron-api" Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.045612 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerName="neutron-httpd" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045617 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerName="neutron-httpd" Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.045630 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api-log" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045636 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api-log" Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.045651 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94dc04af-7548-418b-ac27-1d7cf67a4501" containerName="init" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045657 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="94dc04af-7548-418b-ac27-1d7cf67a4501" containerName="init" Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.045673 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94dc04af-7548-418b-ac27-1d7cf67a4501" containerName="dnsmasq-dns" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045685 4710 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="94dc04af-7548-418b-ac27-1d7cf67a4501" containerName="dnsmasq-dns" Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.045700 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045706 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api" Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.045720 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68f87dd-9d5b-4917-8a8b-1794e4f6668c" containerName="init" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045726 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68f87dd-9d5b-4917-8a8b-1794e4f6668c" containerName="init" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045915 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68f87dd-9d5b-4917-8a8b-1794e4f6668c" containerName="dnsmasq-dns" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045922 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045944 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerName="neutron-httpd" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045953 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerName="neutron-api" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045964 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="97785c4a-071b-453d-b0ad-693c6934b43b" containerName="barbican-api-log" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.045975 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="94dc04af-7548-418b-ac27-1d7cf67a4501" containerName="dnsmasq-dns" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.046587 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.053232 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.053344 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-qjqwm" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.053710 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.063429 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "18baf4b3-8f80-42fa-8291-377b5ae88a92" (UID: "18baf4b3-8f80-42fa-8291-377b5ae88a92"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.074522 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.127200 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4795b5d0-66f8-4392-8496-494fad8e7e69-openstack-config\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.127569 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4795b5d0-66f8-4392-8496-494fad8e7e69-openstack-config-secret\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.127686 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4795b5d0-66f8-4392-8496-494fad8e7e69-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.127803 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlwhc\" (UniqueName: \"kubernetes.io/projected/4795b5d0-66f8-4392-8496-494fad8e7e69-kube-api-access-tlwhc\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.127972 4710 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/18baf4b3-8f80-42fa-8291-377b5ae88a92-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.156889 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94dc04af-7548-418b-ac27-1d7cf67a4501" path="/var/lib/kubelet/pods/94dc04af-7548-418b-ac27-1d7cf67a4501/volumes" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.184929 4710 generic.go:334] "Generic (PLEG): container finished" podID="18baf4b3-8f80-42fa-8291-377b5ae88a92" containerID="2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f" exitCode=0 Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.185000 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58777d5fd4-xrcjb" event={"ID":"18baf4b3-8f80-42fa-8291-377b5ae88a92","Type":"ContainerDied","Data":"2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f"} Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.185035 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58777d5fd4-xrcjb" event={"ID":"18baf4b3-8f80-42fa-8291-377b5ae88a92","Type":"ContainerDied","Data":"466005bb4ac669fe52e9cd930a3cd4c5f5849bd7260166d1c1753867367d0a4e"} Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.185074 4710 scope.go:117] "RemoveContainer" containerID="26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.185195 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58777d5fd4-xrcjb" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.229347 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4795b5d0-66f8-4392-8496-494fad8e7e69-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.229400 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlwhc\" (UniqueName: \"kubernetes.io/projected/4795b5d0-66f8-4392-8496-494fad8e7e69-kube-api-access-tlwhc\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.229499 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4795b5d0-66f8-4392-8496-494fad8e7e69-openstack-config\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.229587 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4795b5d0-66f8-4392-8496-494fad8e7e69-openstack-config-secret\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.233406 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/4795b5d0-66f8-4392-8496-494fad8e7e69-openstack-config\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.234155 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/4795b5d0-66f8-4392-8496-494fad8e7e69-openstack-config-secret\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.236803 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4795b5d0-66f8-4392-8496-494fad8e7e69-combined-ca-bundle\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.249152 4710 scope.go:117] "RemoveContainer" containerID="2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.254163 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlwhc\" (UniqueName: \"kubernetes.io/projected/4795b5d0-66f8-4392-8496-494fad8e7e69-kube-api-access-tlwhc\") pod \"openstackclient\" (UID: \"4795b5d0-66f8-4392-8496-494fad8e7e69\") " pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.258175 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-58777d5fd4-xrcjb"] Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.268535 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-58777d5fd4-xrcjb"] Nov 28 17:19:51 crc kubenswrapper[4710]: 
I1128 17:19:51.353394 4710 scope.go:117] "RemoveContainer" containerID="26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12" Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.353935 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12\": container with ID starting with 26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12 not found: ID does not exist" containerID="26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.353971 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12"} err="failed to get container status \"26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12\": rpc error: code = NotFound desc = could not find container \"26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12\": container with ID starting with 26c7915c8e8be3f687d0106e92bd3d7f4285b47596b8934778a9dbb8115eaa12 not found: ID does not exist" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.354744 4710 scope.go:117] "RemoveContainer" containerID="2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f" Nov 28 17:19:51 crc kubenswrapper[4710]: E1128 17:19:51.355027 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f\": container with ID starting with 2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f not found: ID does not exist" containerID="2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.355055 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f"} err="failed to get container status \"2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f\": rpc error: code = NotFound desc = could not find container \"2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f\": container with ID starting with 2d6bc315b3259416b41c19d0684d517142ba7b6342ad6fdce815ff1243bdb56f not found: ID does not exist" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.380474 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.864227 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 28 17:19:51 crc kubenswrapper[4710]: I1128 17:19:51.892983 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 28 17:19:52 crc kubenswrapper[4710]: I1128 17:19:52.212773 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4795b5d0-66f8-4392-8496-494fad8e7e69","Type":"ContainerStarted","Data":"dc0a0e5ee223724d0a2cadc1c83a7da25f1f62e243d9d896bf3663917b419d01"} Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.156693 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18baf4b3-8f80-42fa-8291-377b5ae88a92" path="/var/lib/kubelet/pods/18baf4b3-8f80-42fa-8291-377b5ae88a92/volumes" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.234843 4710 generic.go:334] "Generic (PLEG): container finished" podID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerID="29624f03592269c9f2a02555280a531861c47774277dab948975ef4c16dcec98" exitCode=0 Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.234923 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8","Type":"ContainerDied","Data":"29624f03592269c9f2a02555280a531861c47774277dab948975ef4c16dcec98"} Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.530615 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.685374 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-scripts\") pod \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.685484 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-combined-ca-bundle\") pod \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.685520 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data-custom\") pod \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.685630 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data\") pod \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.685691 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-etc-machine-id\") pod \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.685821 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-g7pzc\" (UniqueName: \"kubernetes.io/projected/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-kube-api-access-g7pzc\") pod \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\" (UID: \"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8\") " Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.685895 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" (UID: "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.686834 4710 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.691891 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-kube-api-access-g7pzc" (OuterVolumeSpecName: "kube-api-access-g7pzc") pod "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" (UID: "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8"). InnerVolumeSpecName "kube-api-access-g7pzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.692620 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-scripts" (OuterVolumeSpecName: "scripts") pod "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" (UID: "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.692703 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" (UID: "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.763308 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" (UID: "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.788698 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.788730 4710 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.788741 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.788750 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7pzc\" (UniqueName: \"kubernetes.io/projected/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-kube-api-access-g7pzc\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.808771 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data" (OuterVolumeSpecName: "config-data") pod "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" (UID: "8eeb7cf0-ca13-40c0-a2f1-8089959a37e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:19:53 crc kubenswrapper[4710]: I1128 17:19:53.890863 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.257571 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8eeb7cf0-ca13-40c0-a2f1-8089959a37e8","Type":"ContainerDied","Data":"d30f2f7bd427eb272fb15fc7762d6b81bb172a85cf3eb5fa7a7025763d636c3f"} Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.257628 4710 scope.go:117] "RemoveContainer" containerID="67c294c9dfc96ba1c3730ea600364600b77bd026dbae299d71c55964083f8fce" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.257645 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.313422 4710 scope.go:117] "RemoveContainer" containerID="29624f03592269c9f2a02555280a531861c47774277dab948975ef4c16dcec98" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.320414 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.345176 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.368638 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:19:54 crc kubenswrapper[4710]: E1128 17:19:54.369228 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerName="cinder-scheduler" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.369252 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerName="cinder-scheduler" Nov 28 17:19:54 crc kubenswrapper[4710]: E1128 17:19:54.369282 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerName="probe" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.369290 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerName="probe" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.369549 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerName="cinder-scheduler" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.369565 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" containerName="probe" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.370957 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.372934 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.376384 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.405752 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.405872 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-config-data\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.405911 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.405950 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dhj\" (UniqueName: \"kubernetes.io/projected/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-kube-api-access-p7dhj\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.406030 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.406063 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-scripts\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.507620 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.507694 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-scripts\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.507731 4710 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.507806 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-config-data\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.507796 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.507839 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.508006 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7dhj\" (UniqueName: \"kubernetes.io/projected/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-kube-api-access-p7dhj\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.511834 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-scripts\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.518442 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.523292 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-config-data\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.523959 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.526749 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7dhj\" (UniqueName: \"kubernetes.io/projected/7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5-kube-api-access-p7dhj\") pod \"cinder-scheduler-0\" (UID: \"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5\") " 
pod="openstack/cinder-scheduler-0" Nov 28 17:19:54 crc kubenswrapper[4710]: I1128 17:19:54.704893 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 28 17:19:55 crc kubenswrapper[4710]: I1128 17:19:55.159507 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eeb7cf0-ca13-40c0-a2f1-8089959a37e8" path="/var/lib/kubelet/pods/8eeb7cf0-ca13-40c0-a2f1-8089959a37e8/volumes" Nov 28 17:19:55 crc kubenswrapper[4710]: I1128 17:19:55.210001 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 28 17:19:55 crc kubenswrapper[4710]: W1128 17:19:55.236638 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dcb222e_0e19_4ab3_bb78_a7b8ebc23aa5.slice/crio-a18f1ecbff819637e8ec6be57692b2eba349737ac9aba18ad979509d726bb445 WatchSource:0}: Error finding container a18f1ecbff819637e8ec6be57692b2eba349737ac9aba18ad979509d726bb445: Status 404 returned error can't find the container with id a18f1ecbff819637e8ec6be57692b2eba349737ac9aba18ad979509d726bb445 Nov 28 17:19:55 crc kubenswrapper[4710]: I1128 17:19:55.278895 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5","Type":"ContainerStarted","Data":"a18f1ecbff819637e8ec6be57692b2eba349737ac9aba18ad979509d726bb445"} Nov 28 17:19:55 crc kubenswrapper[4710]: I1128 17:19:55.950775 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6459d5bc5f-vhnpr"] Nov 28 17:19:55 crc kubenswrapper[4710]: I1128 17:19:55.954109 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:55 crc kubenswrapper[4710]: I1128 17:19:55.957321 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 28 17:19:55 crc kubenswrapper[4710]: I1128 17:19:55.957494 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 28 17:19:55 crc kubenswrapper[4710]: I1128 17:19:55.957572 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 28 17:19:55 crc kubenswrapper[4710]: I1128 17:19:55.981045 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6459d5bc5f-vhnpr"] Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.045786 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-public-tls-certs\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.045849 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-config-data\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.045988 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-combined-ca-bundle\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.046074 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56843354-a30a-4997-8f6f-0210e3980dc4-etc-swift\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.046327 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-internal-tls-certs\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.046382 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56843354-a30a-4997-8f6f-0210e3980dc4-run-httpd\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.046537 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56843354-a30a-4997-8f6f-0210e3980dc4-log-httpd\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.046867 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27v9t\" (UniqueName: \"kubernetes.io/projected/56843354-a30a-4997-8f6f-0210e3980dc4-kube-api-access-27v9t\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.147646 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27v9t\" (UniqueName: \"kubernetes.io/projected/56843354-a30a-4997-8f6f-0210e3980dc4-kube-api-access-27v9t\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.147701 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-public-tls-certs\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.147733 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-config-data\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.147802 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-combined-ca-bundle\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.147851 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56843354-a30a-4997-8f6f-0210e3980dc4-etc-swift\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.147903 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-internal-tls-certs\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.147925 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56843354-a30a-4997-8f6f-0210e3980dc4-run-httpd\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.147972 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56843354-a30a-4997-8f6f-0210e3980dc4-log-httpd\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.148414 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56843354-a30a-4997-8f6f-0210e3980dc4-log-httpd\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.152872 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/56843354-a30a-4997-8f6f-0210e3980dc4-run-httpd\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.153722 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-public-tls-certs\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.155808 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/56843354-a30a-4997-8f6f-0210e3980dc4-etc-swift\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.156154 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-combined-ca-bundle\") pod 
\"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.159559 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-internal-tls-certs\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.170136 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27v9t\" (UniqueName: \"kubernetes.io/projected/56843354-a30a-4997-8f6f-0210e3980dc4-kube-api-access-27v9t\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.174203 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56843354-a30a-4997-8f6f-0210e3980dc4-config-data\") pod \"swift-proxy-6459d5bc5f-vhnpr\" (UID: \"56843354-a30a-4997-8f6f-0210e3980dc4\") " pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.289342 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5","Type":"ContainerStarted","Data":"39130ae784774bccd9be02ecbafba8dabec5a9f4e66fa16b097b68bea166fd8e"} Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.324912 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.476341 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.476829 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="ceilometer-central-agent" containerID="cri-o://e3768037e9213310d308863b2bb69166db77cc68ebad6efe6dcf9cb820fbbeee" gracePeriod=30 Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.477264 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="sg-core" containerID="cri-o://9efa9640b04942081a34e39c7ea123b9799f20254cd7e44378ed45d078a997ca" gracePeriod=30 Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.477436 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="proxy-httpd" containerID="cri-o://b0b1d5bc08b7cf55c02f06d0a5d7423e180adace9aca3b7907c754c2617f0845" gracePeriod=30 Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.477488 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="ceilometer-notification-agent" containerID="cri-o://1b945c120ee5e0f0b518b53f408f0ba54246191b5bf7aece01f6916b3ed7dc6a" gracePeriod=30 Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.509107 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="proxy-httpd" 
probeResult="failure" output="Get \"http://10.217.0.186:3000/\": EOF" Nov 28 17:19:56 crc kubenswrapper[4710]: I1128 17:19:56.949807 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6459d5bc5f-vhnpr"] Nov 28 17:19:57 crc kubenswrapper[4710]: I1128 17:19:57.309383 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5","Type":"ContainerStarted","Data":"37af130ab2ce723ffe4500522101e7a7700fdff91128753ba188b46ce895331f"} Nov 28 17:19:57 crc kubenswrapper[4710]: I1128 17:19:57.315087 4710 generic.go:334] "Generic (PLEG): container finished" podID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerID="b0b1d5bc08b7cf55c02f06d0a5d7423e180adace9aca3b7907c754c2617f0845" exitCode=0 Nov 28 17:19:57 crc kubenswrapper[4710]: I1128 17:19:57.315124 4710 generic.go:334] "Generic (PLEG): container finished" podID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerID="9efa9640b04942081a34e39c7ea123b9799f20254cd7e44378ed45d078a997ca" exitCode=2 Nov 28 17:19:57 crc kubenswrapper[4710]: I1128 17:19:57.315136 4710 generic.go:334] "Generic (PLEG): container finished" podID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerID="e3768037e9213310d308863b2bb69166db77cc68ebad6efe6dcf9cb820fbbeee" exitCode=0 Nov 28 17:19:57 crc kubenswrapper[4710]: I1128 17:19:57.315159 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerDied","Data":"b0b1d5bc08b7cf55c02f06d0a5d7423e180adace9aca3b7907c754c2617f0845"} Nov 28 17:19:57 crc kubenswrapper[4710]: I1128 17:19:57.315216 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerDied","Data":"9efa9640b04942081a34e39c7ea123b9799f20254cd7e44378ed45d078a997ca"} Nov 28 17:19:57 crc kubenswrapper[4710]: I1128 17:19:57.315229 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerDied","Data":"e3768037e9213310d308863b2bb69166db77cc68ebad6efe6dcf9cb820fbbeee"} Nov 28 17:19:57 crc kubenswrapper[4710]: I1128 17:19:57.339392 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.339372726 podStartE2EDuration="3.339372726s" podCreationTimestamp="2025-11-28 17:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:19:57.326024297 +0000 UTC m=+1286.584324362" watchObservedRunningTime="2025-11-28 17:19:57.339372726 +0000 UTC m=+1286.597672771" Nov 28 17:19:57 crc kubenswrapper[4710]: I1128 17:19:57.342087 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" event={"ID":"56843354-a30a-4997-8f6f-0210e3980dc4","Type":"ContainerStarted","Data":"c58518f87e9d7e1b64143431c21bc3d40f3b068e4fa9fc84db4445217a6ed7d0"} Nov 28 17:19:59 crc kubenswrapper[4710]: I1128 17:19:59.364938 4710 generic.go:334] "Generic (PLEG): container finished" podID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerID="1b945c120ee5e0f0b518b53f408f0ba54246191b5bf7aece01f6916b3ed7dc6a" exitCode=0 Nov 28 17:19:59 crc kubenswrapper[4710]: I1128 17:19:59.365217 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerDied","Data":"1b945c120ee5e0f0b518b53f408f0ba54246191b5bf7aece01f6916b3ed7dc6a"} Nov 28 17:19:59 crc kubenswrapper[4710]: I1128 17:19:59.706724 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.069800 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.070653 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerName="glance-httpd" containerID="cri-o://509270d07e24efd376b2c6dcbf5dcc8eb1474d6d025d23f660fc2ddadf42a597" gracePeriod=30 Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.070681 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerName="glance-log" containerID="cri-o://629863b32cabb090c5f186c7a3eec3329a75a9a9b11963440dfac8179015b25b" gracePeriod=30 Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.165851 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.186:3000/\": dial tcp 10.217.0.186:3000: connect: connection refused" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.423889 4710 generic.go:334] "Generic (PLEG): container finished" podID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerID="629863b32cabb090c5f186c7a3eec3329a75a9a9b11963440dfac8179015b25b" exitCode=143 Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.424638 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"661e3628-1a58-4dda-8cb6-c07c13c5b7f3","Type":"ContainerDied","Data":"629863b32cabb090c5f186c7a3eec3329a75a9a9b11963440dfac8179015b25b"} Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.519490 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.705454 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-run-httpd\") pod \"3d621306-f4b5-4cb5-a1b5-971a4444496a\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.705860 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-scripts\") pod \"3d621306-f4b5-4cb5-a1b5-971a4444496a\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.706022 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpsk8\" (UniqueName: \"kubernetes.io/projected/3d621306-f4b5-4cb5-a1b5-971a4444496a-kube-api-access-xpsk8\") pod \"3d621306-f4b5-4cb5-a1b5-971a4444496a\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.706043 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-config-data\") pod \"3d621306-f4b5-4cb5-a1b5-971a4444496a\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.706084 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-combined-ca-bundle\") pod \"3d621306-f4b5-4cb5-a1b5-971a4444496a\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.706103 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-log-httpd\") pod \"3d621306-f4b5-4cb5-a1b5-971a4444496a\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.706132 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-sg-core-conf-yaml\") pod \"3d621306-f4b5-4cb5-a1b5-971a4444496a\" (UID: \"3d621306-f4b5-4cb5-a1b5-971a4444496a\") " Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.707357 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3d621306-f4b5-4cb5-a1b5-971a4444496a" (UID: "3d621306-f4b5-4cb5-a1b5-971a4444496a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.707432 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3d621306-f4b5-4cb5-a1b5-971a4444496a" (UID: "3d621306-f4b5-4cb5-a1b5-971a4444496a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.712343 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d621306-f4b5-4cb5-a1b5-971a4444496a-kube-api-access-xpsk8" (OuterVolumeSpecName: "kube-api-access-xpsk8") pod "3d621306-f4b5-4cb5-a1b5-971a4444496a" (UID: "3d621306-f4b5-4cb5-a1b5-971a4444496a"). InnerVolumeSpecName "kube-api-access-xpsk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.712385 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-scripts" (OuterVolumeSpecName: "scripts") pod "3d621306-f4b5-4cb5-a1b5-971a4444496a" (UID: "3d621306-f4b5-4cb5-a1b5-971a4444496a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.735266 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3d621306-f4b5-4cb5-a1b5-971a4444496a" (UID: "3d621306-f4b5-4cb5-a1b5-971a4444496a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.783419 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d621306-f4b5-4cb5-a1b5-971a4444496a" (UID: "3d621306-f4b5-4cb5-a1b5-971a4444496a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.809795 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpsk8\" (UniqueName: \"kubernetes.io/projected/3d621306-f4b5-4cb5-a1b5-971a4444496a-kube-api-access-xpsk8\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.809833 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.809845 4710 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.809856 4710 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.809870 4710 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3d621306-f4b5-4cb5-a1b5-971a4444496a-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.809881 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.820008 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-config-data" (OuterVolumeSpecName: "config-data") pod "3d621306-f4b5-4cb5-a1b5-971a4444496a" (UID: "3d621306-f4b5-4cb5-a1b5-971a4444496a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:03 crc kubenswrapper[4710]: I1128 17:20:03.911538 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d621306-f4b5-4cb5-a1b5-971a4444496a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.438905 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3d621306-f4b5-4cb5-a1b5-971a4444496a","Type":"ContainerDied","Data":"2a05a7f75eb9f3be1a6c065caf30a2d9347491c41fe335394f65c988f47f0b47"} Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.438965 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.438972 4710 scope.go:117] "RemoveContainer" containerID="b0b1d5bc08b7cf55c02f06d0a5d7423e180adace9aca3b7907c754c2617f0845" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.441040 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"4795b5d0-66f8-4392-8496-494fad8e7e69","Type":"ContainerStarted","Data":"5b19d7c2f1c88ec2275042f98e2567852256334258ab11e523e4aa0d102c37a0"} Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.444661 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" event={"ID":"56843354-a30a-4997-8f6f-0210e3980dc4","Type":"ContainerStarted","Data":"4fa85243936830cfb53af21012755116a79758e692869c6784f1e1f68673bedf"} Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.444728 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" event={"ID":"56843354-a30a-4997-8f6f-0210e3980dc4","Type":"ContainerStarted","Data":"72fca70e4163770c3b4b26446c151a90c5677cc73c968fababbc44ceddeae8e6"} Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.444788 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.467198 4710 scope.go:117] "RemoveContainer" containerID="9efa9640b04942081a34e39c7ea123b9799f20254cd7e44378ed45d078a997ca" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.468144 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.103324453 podStartE2EDuration="13.468126006s" podCreationTimestamp="2025-11-28 17:19:51 +0000 UTC" firstStartedPulling="2025-11-28 17:19:51.919563581 +0000 UTC m=+1281.177863626" lastFinishedPulling="2025-11-28 17:20:03.284365134 +0000 UTC m=+1292.542665179" observedRunningTime="2025-11-28 17:20:04.459921999 +0000 UTC m=+1293.718222054" watchObservedRunningTime="2025-11-28 17:20:04.468126006 +0000 UTC m=+1293.726426051" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.499406 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" podStartSLOduration=9.499389828 podStartE2EDuration="9.499389828s" podCreationTimestamp="2025-11-28 17:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 
17:20:04.494511875 +0000 UTC m=+1293.752811950" watchObservedRunningTime="2025-11-28 17:20:04.499389828 +0000 UTC m=+1293.757689873" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.572938 4710 scope.go:117] "RemoveContainer" containerID="1b945c120ee5e0f0b518b53f408f0ba54246191b5bf7aece01f6916b3ed7dc6a" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.600379 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.616215 4710 scope.go:117] "RemoveContainer" containerID="e3768037e9213310d308863b2bb69166db77cc68ebad6efe6dcf9cb820fbbeee" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.623182 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.637865 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:04 crc kubenswrapper[4710]: E1128 17:20:04.638451 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="sg-core" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.638477 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="sg-core" Nov 28 17:20:04 crc kubenswrapper[4710]: E1128 17:20:04.638506 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="proxy-httpd" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.638515 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="proxy-httpd" Nov 28 17:20:04 crc kubenswrapper[4710]: E1128 17:20:04.638528 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="ceilometer-central-agent" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.638536 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="ceilometer-central-agent" Nov 28 17:20:04 crc kubenswrapper[4710]: E1128 17:20:04.638547 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="ceilometer-notification-agent" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.638555 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="ceilometer-notification-agent" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.638864 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="proxy-httpd" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.638893 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="ceilometer-notification-agent" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.638909 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="ceilometer-central-agent" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.638925 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" containerName="sg-core" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.641323 4710 util.go:30] "No sandbox for pod can be found. 
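The pod_startup_latency_tracker line for openstackclient just above is worth decoding: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (17:20:04.468 - 17:19:51 = 13.468s), while podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling - firstStartedPulling ≈ 11.365s), leaving the logged 2.103324453s. For cinder-scheduler-0 and swift-proxy the pull timestamps are the zero time, so the two durations are equal. The arithmetic, using the timestamps from the log:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

// Reproduce the two durations from the openstackclient latency line:
// E2E = observed running - pod creation; SLO excludes the image-pull window.
func main() {
	created := mustParse("2025-11-28 17:19:51 +0000 UTC")
	observed := mustParse("2025-11-28 17:20:04.468126006 +0000 UTC")
	pullStart := mustParse("2025-11-28 17:19:51.919563581 +0000 UTC")
	pullEnd := mustParse("2025-11-28 17:20:03.284365134 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println("podStartE2EDuration:", e2e) // 13.468126006s
	fmt.Println("podStartSLOduration:", slo) // 2.103324453s
}
```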
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.643623 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.644685 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.647967 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.827455 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-config-data\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.827512 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg5t4\" (UniqueName: \"kubernetes.io/projected/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-kube-api-access-pg5t4\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.827566 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.827598 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-log-httpd\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.827647 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-run-httpd\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.827712 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.827774 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-scripts\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.929988 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 
17:20:04.930098 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-scripts\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.930176 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-config-data\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.930207 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg5t4\" (UniqueName: \"kubernetes.io/projected/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-kube-api-access-pg5t4\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.930275 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.930311 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-log-httpd\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.931147 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-log-httpd\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.931260 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-run-httpd\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.931561 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-run-httpd\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.936167 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-config-data\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.937541 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.939563 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.951625 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-scripts\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:04 crc kubenswrapper[4710]: I1128 17:20:04.963570 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg5t4\" (UniqueName: \"kubernetes.io/projected/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-kube-api-access-pg5t4\") pod \"ceilometer-0\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " pod="openstack/ceilometer-0" Nov 28 17:20:05 crc kubenswrapper[4710]: I1128 17:20:05.156944 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d621306-f4b5-4cb5-a1b5-971a4444496a" path="/var/lib/kubelet/pods/3d621306-f4b5-4cb5-a1b5-971a4444496a/volumes" Nov 28 17:20:05 crc kubenswrapper[4710]: I1128 17:20:05.158377 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 28 17:20:05 crc kubenswrapper[4710]: I1128 17:20:05.262310 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:05 crc kubenswrapper[4710]: I1128 17:20:05.483697 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:20:05 crc kubenswrapper[4710]: I1128 17:20:05.862836 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:06 crc kubenswrapper[4710]: I1128 17:20:06.496874 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerStarted","Data":"e9aefd0a3332f22b83671025f055dc9b72a9e0cc70e9bf658736fe35fd3e7d94"} Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.210534 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.226238 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerName="glance-log" containerID="cri-o://86b3dfd43dbd66f7b02cf8515b1bff02bbc8ae27511e132ec3c0b461f4a4d40e" gracePeriod=30 Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.226450 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerName="glance-httpd" containerID="cri-o://25cd53bf119f2d67e1e659a9f155f09ff66968f2331d5606757803945df5375c" gracePeriod=30 Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.412994 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-xzg2r"] Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.414621 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.444273 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xzg2r"] Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.512818 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xbg2v"] Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.515824 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.552598 4710 generic.go:334] "Generic (PLEG): container finished" podID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerID="509270d07e24efd376b2c6dcbf5dcc8eb1474d6d025d23f660fc2ddadf42a597" exitCode=0 Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.552670 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"661e3628-1a58-4dda-8cb6-c07c13c5b7f3","Type":"ContainerDied","Data":"509270d07e24efd376b2c6dcbf5dcc8eb1474d6d025d23f660fc2ddadf42a597"} Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.558118 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-6103-account-create-update-5v27b"] Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.561114 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.574120 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.577667 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xbg2v"] Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.582988 4710 generic.go:334] "Generic (PLEG): container finished" podID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerID="86b3dfd43dbd66f7b02cf8515b1bff02bbc8ae27511e132ec3c0b461f4a4d40e" exitCode=143 Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.583045 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f59c3678-cb58-4462-9ef6-7d91911117ee","Type":"ContainerDied","Data":"86b3dfd43dbd66f7b02cf8515b1bff02bbc8ae27511e132ec3c0b461f4a4d40e"} Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.595217 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-operator-scripts\") pod \"nova-api-db-create-xzg2r\" (UID: \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\") " pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.595355 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whqss\" (UniqueName: \"kubernetes.io/projected/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-kube-api-access-whqss\") pod \"nova-api-db-create-xzg2r\" (UID: \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\") " pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.629567 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6103-account-create-update-5v27b"] Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.673917 4710 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-db-create-jjfmr"] Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.676430 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.702594 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6b2l\" (UniqueName: \"kubernetes.io/projected/43c3704a-dd9e-4512-858a-e7de0883d025-kube-api-access-s6b2l\") pod \"nova-api-6103-account-create-update-5v27b\" (UID: \"43c3704a-dd9e-4512-858a-e7de0883d025\") " pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.702772 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43c3704a-dd9e-4512-858a-e7de0883d025-operator-scripts\") pod \"nova-api-6103-account-create-update-5v27b\" (UID: \"43c3704a-dd9e-4512-858a-e7de0883d025\") " pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.702881 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18dbb15b-b948-436e-8bf0-3800d84f58a3-operator-scripts\") pod \"nova-cell0-db-create-xbg2v\" (UID: \"18dbb15b-b948-436e-8bf0-3800d84f58a3\") " pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.702921 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-operator-scripts\") pod \"nova-api-db-create-xzg2r\" (UID: \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\") " pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.703005 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww25f\" (UniqueName: \"kubernetes.io/projected/18dbb15b-b948-436e-8bf0-3800d84f58a3-kube-api-access-ww25f\") pod \"nova-cell0-db-create-xbg2v\" (UID: \"18dbb15b-b948-436e-8bf0-3800d84f58a3\") " pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.703047 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whqss\" (UniqueName: \"kubernetes.io/projected/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-kube-api-access-whqss\") pod \"nova-api-db-create-xzg2r\" (UID: \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\") " pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.704449 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-operator-scripts\") pod \"nova-api-db-create-xzg2r\" (UID: \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\") " pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.771599 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whqss\" (UniqueName: \"kubernetes.io/projected/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-kube-api-access-whqss\") pod \"nova-api-db-create-xzg2r\" (UID: \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\") " pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.795939 4710 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.805501 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18dbb15b-b948-436e-8bf0-3800d84f58a3-operator-scripts\") pod \"nova-cell0-db-create-xbg2v\" (UID: \"18dbb15b-b948-436e-8bf0-3800d84f58a3\") " pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.805584 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75hv4\" (UniqueName: \"kubernetes.io/projected/63e58811-7bf9-4bba-813d-d6267295e4da-kube-api-access-75hv4\") pod \"nova-cell1-db-create-jjfmr\" (UID: \"63e58811-7bf9-4bba-813d-d6267295e4da\") " pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.805631 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww25f\" (UniqueName: \"kubernetes.io/projected/18dbb15b-b948-436e-8bf0-3800d84f58a3-kube-api-access-ww25f\") pod \"nova-cell0-db-create-xbg2v\" (UID: \"18dbb15b-b948-436e-8bf0-3800d84f58a3\") " pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.805670 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6b2l\" (UniqueName: \"kubernetes.io/projected/43c3704a-dd9e-4512-858a-e7de0883d025-kube-api-access-s6b2l\") pod \"nova-api-6103-account-create-update-5v27b\" (UID: \"43c3704a-dd9e-4512-858a-e7de0883d025\") " pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.805722 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43c3704a-dd9e-4512-858a-e7de0883d025-operator-scripts\") pod \"nova-api-6103-account-create-update-5v27b\" (UID: \"43c3704a-dd9e-4512-858a-e7de0883d025\") " pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.805794 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63e58811-7bf9-4bba-813d-d6267295e4da-operator-scripts\") pod \"nova-cell1-db-create-jjfmr\" (UID: \"63e58811-7bf9-4bba-813d-d6267295e4da\") " pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.806555 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18dbb15b-b948-436e-8bf0-3800d84f58a3-operator-scripts\") pod \"nova-cell0-db-create-xbg2v\" (UID: \"18dbb15b-b948-436e-8bf0-3800d84f58a3\") " pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.807357 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43c3704a-dd9e-4512-858a-e7de0883d025-operator-scripts\") pod \"nova-api-6103-account-create-update-5v27b\" (UID: \"43c3704a-dd9e-4512-858a-e7de0883d025\") " pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.840098 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jjfmr"] Nov 28 17:20:07 crc 
kubenswrapper[4710]: I1128 17:20:07.845573 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww25f\" (UniqueName: \"kubernetes.io/projected/18dbb15b-b948-436e-8bf0-3800d84f58a3-kube-api-access-ww25f\") pod \"nova-cell0-db-create-xbg2v\" (UID: \"18dbb15b-b948-436e-8bf0-3800d84f58a3\") " pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.852405 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6b2l\" (UniqueName: \"kubernetes.io/projected/43c3704a-dd9e-4512-858a-e7de0883d025-kube-api-access-s6b2l\") pod \"nova-api-6103-account-create-update-5v27b\" (UID: \"43c3704a-dd9e-4512-858a-e7de0883d025\") " pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.879038 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e303-account-create-update-s2h9m"] Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.880441 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.888236 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.908006 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63e58811-7bf9-4bba-813d-d6267295e4da-operator-scripts\") pod \"nova-cell1-db-create-jjfmr\" (UID: \"63e58811-7bf9-4bba-813d-d6267295e4da\") " pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.908082 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75hv4\" (UniqueName: \"kubernetes.io/projected/63e58811-7bf9-4bba-813d-d6267295e4da-kube-api-access-75hv4\") pod \"nova-cell1-db-create-jjfmr\" (UID: \"63e58811-7bf9-4bba-813d-d6267295e4da\") " pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:07 crc kubenswrapper[4710]: I1128 17:20:07.909086 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63e58811-7bf9-4bba-813d-d6267295e4da-operator-scripts\") pod \"nova-cell1-db-create-jjfmr\" (UID: \"63e58811-7bf9-4bba-813d-d6267295e4da\") " pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:07.944417 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75hv4\" (UniqueName: \"kubernetes.io/projected/63e58811-7bf9-4bba-813d-d6267295e4da-kube-api-access-75hv4\") pod \"nova-cell1-db-create-jjfmr\" (UID: \"63e58811-7bf9-4bba-813d-d6267295e4da\") " pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:07.953548 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e303-account-create-update-s2h9m"] Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:07.987098 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0b59-account-create-update-k7gbg"] Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:07.988896 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:07.993166 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.009643 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/260df611-7b77-4b0d-b58a-beae48fe7e46-operator-scripts\") pod \"nova-cell0-e303-account-create-update-s2h9m\" (UID: \"260df611-7b77-4b0d-b58a-beae48fe7e46\") " pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.009840 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzwr4\" (UniqueName: \"kubernetes.io/projected/260df611-7b77-4b0d-b58a-beae48fe7e46-kube-api-access-hzwr4\") pod \"nova-cell0-e303-account-create-update-s2h9m\" (UID: \"260df611-7b77-4b0d-b58a-beae48fe7e46\") " pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.035700 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0b59-account-create-update-k7gbg"] Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.094647 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.107157 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.116354 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/260df611-7b77-4b0d-b58a-beae48fe7e46-operator-scripts\") pod \"nova-cell0-e303-account-create-update-s2h9m\" (UID: \"260df611-7b77-4b0d-b58a-beae48fe7e46\") " pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.116404 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-724lc\" (UniqueName: \"kubernetes.io/projected/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-kube-api-access-724lc\") pod \"nova-cell1-0b59-account-create-update-k7gbg\" (UID: \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\") " pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.116489 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-operator-scripts\") pod \"nova-cell1-0b59-account-create-update-k7gbg\" (UID: \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\") " pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.116550 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzwr4\" (UniqueName: \"kubernetes.io/projected/260df611-7b77-4b0d-b58a-beae48fe7e46-kube-api-access-hzwr4\") pod \"nova-cell0-e303-account-create-update-s2h9m\" (UID: \"260df611-7b77-4b0d-b58a-beae48fe7e46\") " pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.117949 4710 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/260df611-7b77-4b0d-b58a-beae48fe7e46-operator-scripts\") pod \"nova-cell0-e303-account-create-update-s2h9m\" (UID: \"260df611-7b77-4b0d-b58a-beae48fe7e46\") " pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.156522 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzwr4\" (UniqueName: \"kubernetes.io/projected/260df611-7b77-4b0d-b58a-beae48fe7e46-kube-api-access-hzwr4\") pod \"nova-cell0-e303-account-create-update-s2h9m\" (UID: \"260df611-7b77-4b0d-b58a-beae48fe7e46\") " pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.168321 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.220610 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.220693 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4m6g\" (UniqueName: \"kubernetes.io/projected/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-kube-api-access-f4m6g\") pod \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.220746 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-scripts\") pod \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.220806 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-config-data\") pod \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.220887 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-httpd-run\") pod \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.220945 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-public-tls-certs\") pod \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.220983 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-logs\") pod \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.221021 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-combined-ca-bundle\") pod \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\" (UID: \"661e3628-1a58-4dda-8cb6-c07c13c5b7f3\") " Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.221331 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-operator-scripts\") pod \"nova-cell1-0b59-account-create-update-k7gbg\" (UID: \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\") " pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.221594 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-724lc\" (UniqueName: \"kubernetes.io/projected/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-kube-api-access-724lc\") pod \"nova-cell1-0b59-account-create-update-k7gbg\" (UID: \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\") " pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.222480 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-logs" (OuterVolumeSpecName: "logs") pod "661e3628-1a58-4dda-8cb6-c07c13c5b7f3" (UID: "661e3628-1a58-4dda-8cb6-c07c13c5b7f3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.223133 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-operator-scripts\") pod \"nova-cell1-0b59-account-create-update-k7gbg\" (UID: \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\") " pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.229952 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.239786 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "661e3628-1a58-4dda-8cb6-c07c13c5b7f3" (UID: "661e3628-1a58-4dda-8cb6-c07c13c5b7f3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.239823 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "661e3628-1a58-4dda-8cb6-c07c13c5b7f3" (UID: "661e3628-1a58-4dda-8cb6-c07c13c5b7f3"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.241272 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-kube-api-access-f4m6g" (OuterVolumeSpecName: "kube-api-access-f4m6g") pod "661e3628-1a58-4dda-8cb6-c07c13c5b7f3" (UID: "661e3628-1a58-4dda-8cb6-c07c13c5b7f3"). InnerVolumeSpecName "kube-api-access-f4m6g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.241404 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-scripts" (OuterVolumeSpecName: "scripts") pod "661e3628-1a58-4dda-8cb6-c07c13c5b7f3" (UID: "661e3628-1a58-4dda-8cb6-c07c13c5b7f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.248147 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-724lc\" (UniqueName: \"kubernetes.io/projected/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-kube-api-access-724lc\") pod \"nova-cell1-0b59-account-create-update-k7gbg\" (UID: \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\") " pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.323169 4710 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.323201 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.323219 4710 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.323230 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4m6g\" (UniqueName: \"kubernetes.io/projected/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-kube-api-access-f4m6g\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.323241 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.340902 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "661e3628-1a58-4dda-8cb6-c07c13c5b7f3" (UID: "661e3628-1a58-4dda-8cb6-c07c13c5b7f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.406332 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-xzg2r"] Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.419425 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-config-data" (OuterVolumeSpecName: "config-data") pod "661e3628-1a58-4dda-8cb6-c07c13c5b7f3" (UID: "661e3628-1a58-4dda-8cb6-c07c13c5b7f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.422434 4710 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.425282 4710 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.425314 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.425340 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.446925 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "661e3628-1a58-4dda-8cb6-c07c13c5b7f3" (UID: "661e3628-1a58-4dda-8cb6-c07c13c5b7f3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.447339 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.509279 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.528420 4710 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e3628-1a58-4dda-8cb6-c07c13c5b7f3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.611658 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"661e3628-1a58-4dda-8cb6-c07c13c5b7f3","Type":"ContainerDied","Data":"c32bbc3a5d5354599ab40968c7b5b6d6ecbcd154101d1cc67134b4cf7dce52e4"} Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.611720 4710 scope.go:117] "RemoveContainer" containerID="509270d07e24efd376b2c6dcbf5dcc8eb1474d6d025d23f660fc2ddadf42a597" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.611925 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.614010 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xzg2r" event={"ID":"bb564cd2-ed57-49d0-9b9a-a193e5f8418b","Type":"ContainerStarted","Data":"40dfe7f51cd651b806bf886fef92c378669a199a704f50b0fde6165eede50d8c"} Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.621564 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerStarted","Data":"bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66"} Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.685439 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.699383 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.705717 4710 scope.go:117] "RemoveContainer" containerID="629863b32cabb090c5f186c7a3eec3329a75a9a9b11963440dfac8179015b25b" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.777711 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:20:08 crc kubenswrapper[4710]: E1128 17:20:08.778524 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerName="glance-log" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.778545 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerName="glance-log" Nov 28 17:20:08 crc kubenswrapper[4710]: E1128 17:20:08.778577 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerName="glance-httpd" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.778586 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerName="glance-httpd" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.778945 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerName="glance-log" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.778977 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" containerName="glance-httpd" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.780355 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.787889 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.788137 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.800996 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.895182 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xbg2v"] Nov 28 17:20:08 crc kubenswrapper[4710]: W1128 17:20:08.913196 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18dbb15b_b948_436e_8bf0_3800d84f58a3.slice/crio-c9fe9368254b2f1b011fbad4e93e6db00fdb851cee4c9b6b55906bdef3c5ebcc WatchSource:0}: Error finding container c9fe9368254b2f1b011fbad4e93e6db00fdb851cee4c9b6b55906bdef3c5ebcc: Status 404 returned error can't find the container with id c9fe9368254b2f1b011fbad4e93e6db00fdb851cee4c9b6b55906bdef3c5ebcc Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.919352 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-6103-account-create-update-5v27b"] Nov 28 17:20:08 crc kubenswrapper[4710]: W1128 17:20:08.959913 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43c3704a_dd9e_4512_858a_e7de0883d025.slice/crio-4bff12312f5d09016bab1fe92da7999d75cd644cdb6b52719278bfbc69689bf7 WatchSource:0}: Error finding container 4bff12312f5d09016bab1fe92da7999d75cd644cdb6b52719278bfbc69689bf7: Status 404 returned error can't find the container with id 4bff12312f5d09016bab1fe92da7999d75cd644cdb6b52719278bfbc69689bf7 Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.962530 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa610e74-7719-43b5-ae08-ea611158b446-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.962641 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.962696 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.962718 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.962798 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.962853 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8skd8\" (UniqueName: \"kubernetes.io/projected/fa610e74-7719-43b5-ae08-ea611158b446-kube-api-access-8skd8\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.962877 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:08 crc kubenswrapper[4710]: I1128 17:20:08.962893 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa610e74-7719-43b5-ae08-ea611158b446-logs\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.065066 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8skd8\" (UniqueName: \"kubernetes.io/projected/fa610e74-7719-43b5-ae08-ea611158b446-kube-api-access-8skd8\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.066106 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.065163 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.068413 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa610e74-7719-43b5-ae08-ea611158b446-logs\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.068496 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa610e74-7719-43b5-ae08-ea611158b446-httpd-run\") pod \"glance-default-external-api-0\" (UID: 
\"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.068676 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.068819 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.068891 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.069061 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.069101 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa610e74-7719-43b5-ae08-ea611158b446-logs\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.076541 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa610e74-7719-43b5-ae08-ea611158b446-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.084241 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.092403 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8skd8\" (UniqueName: \"kubernetes.io/projected/fa610e74-7719-43b5-ae08-ea611158b446-kube-api-access-8skd8\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.097431 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc 
kubenswrapper[4710]: I1128 17:20:09.109723 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-jjfmr"] Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.113022 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.130214 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa610e74-7719-43b5-ae08-ea611158b446-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.176895 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"fa610e74-7719-43b5-ae08-ea611158b446\") " pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.188905 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661e3628-1a58-4dda-8cb6-c07c13c5b7f3" path="/var/lib/kubelet/pods/661e3628-1a58-4dda-8cb6-c07c13c5b7f3/volumes" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.365897 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e303-account-create-update-s2h9m"] Nov 28 17:20:09 crc kubenswrapper[4710]: W1128 17:20:09.372199 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod260df611_7b77_4b0d_b58a_beae48fe7e46.slice/crio-d4985330a57e1022eafb67635123ab269eddbcbc510f19f34c0b74bde8d19a6a WatchSource:0}: Error finding container d4985330a57e1022eafb67635123ab269eddbcbc510f19f34c0b74bde8d19a6a: Status 404 returned error can't find the container with id d4985330a57e1022eafb67635123ab269eddbcbc510f19f34c0b74bde8d19a6a Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.379494 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0b59-account-create-update-k7gbg"] Nov 28 17:20:09 crc kubenswrapper[4710]: W1128 17:20:09.382133 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42b437e4_c7f2_4750_82e0_b75ab9bc0ea0.slice/crio-e43ddd096baff7962b6291173e931bdf946aabd9209134aea53e1f4bd0377b9f WatchSource:0}: Error finding container e43ddd096baff7962b6291173e931bdf946aabd9209134aea53e1f4bd0377b9f: Status 404 returned error can't find the container with id e43ddd096baff7962b6291173e931bdf946aabd9209134aea53e1f4bd0377b9f Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.423318 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.697151 4710 generic.go:334] "Generic (PLEG): container finished" podID="bb564cd2-ed57-49d0-9b9a-a193e5f8418b" containerID="ab51d1d5e5730b440bab928f2a8f2db91c8453c23d53e7a8682e2ca7b518f146" exitCode=0 Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.697220 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xzg2r" event={"ID":"bb564cd2-ed57-49d0-9b9a-a193e5f8418b","Type":"ContainerDied","Data":"ab51d1d5e5730b440bab928f2a8f2db91c8453c23d53e7a8682e2ca7b518f146"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.726239 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerStarted","Data":"656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.731386 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" event={"ID":"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0","Type":"ContainerStarted","Data":"e43ddd096baff7962b6291173e931bdf946aabd9209134aea53e1f4bd0377b9f"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.738568 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xbg2v" event={"ID":"18dbb15b-b948-436e-8bf0-3800d84f58a3","Type":"ContainerStarted","Data":"86add492a92cc8d990205416a46151ef23eab33cbcca734c16ee56aa8e501119"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.738605 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xbg2v" event={"ID":"18dbb15b-b948-436e-8bf0-3800d84f58a3","Type":"ContainerStarted","Data":"c9fe9368254b2f1b011fbad4e93e6db00fdb851cee4c9b6b55906bdef3c5ebcc"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.743779 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jjfmr" event={"ID":"63e58811-7bf9-4bba-813d-d6267295e4da","Type":"ContainerStarted","Data":"10bdb3058bd37df4b9dfda52dfdc6b8f7b88074755c706636353da3539b356b2"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.743826 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jjfmr" event={"ID":"63e58811-7bf9-4bba-813d-d6267295e4da","Type":"ContainerStarted","Data":"bc50af149c8c3d960c0745720d023ed25127434dbf119ea529cae6ef9d2daaec"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.758862 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6103-account-create-update-5v27b" event={"ID":"43c3704a-dd9e-4512-858a-e7de0883d025","Type":"ContainerStarted","Data":"60dcd3bfbd2b7f73e2d10a4fbf69da0c4cad7c2d91052db0b43bff7a6fe46a66"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.758925 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6103-account-create-update-5v27b" event={"ID":"43c3704a-dd9e-4512-858a-e7de0883d025","Type":"ContainerStarted","Data":"4bff12312f5d09016bab1fe92da7999d75cd644cdb6b52719278bfbc69689bf7"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 17:20:09.784306 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e303-account-create-update-s2h9m" event={"ID":"260df611-7b77-4b0d-b58a-beae48fe7e46","Type":"ContainerStarted","Data":"d4985330a57e1022eafb67635123ab269eddbcbc510f19f34c0b74bde8d19a6a"} Nov 28 17:20:09 crc kubenswrapper[4710]: I1128 
17:20:09.834972 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-jjfmr" podStartSLOduration=2.834950507 podStartE2EDuration="2.834950507s" podCreationTimestamp="2025-11-28 17:20:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:09.808209268 +0000 UTC m=+1299.066509313" watchObservedRunningTime="2025-11-28 17:20:09.834950507 +0000 UTC m=+1299.093250552" Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.178114 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 28 17:20:10 crc kubenswrapper[4710]: W1128 17:20:10.238974 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa610e74_7719_43b5_ae08_ea611158b446.slice/crio-9beaf98f450f2c761f702d0c898965e65380592cea1d4bcfe3e004d4063132a6 WatchSource:0}: Error finding container 9beaf98f450f2c761f702d0c898965e65380592cea1d4bcfe3e004d4063132a6: Status 404 returned error can't find the container with id 9beaf98f450f2c761f702d0c898965e65380592cea1d4bcfe3e004d4063132a6 Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.714717 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.823160 4710 generic.go:334] "Generic (PLEG): container finished" podID="18dbb15b-b948-436e-8bf0-3800d84f58a3" containerID="86add492a92cc8d990205416a46151ef23eab33cbcca734c16ee56aa8e501119" exitCode=0 Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.823234 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xbg2v" event={"ID":"18dbb15b-b948-436e-8bf0-3800d84f58a3","Type":"ContainerDied","Data":"86add492a92cc8d990205416a46151ef23eab33cbcca734c16ee56aa8e501119"} Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.826336 4710 generic.go:334] "Generic (PLEG): container finished" podID="63e58811-7bf9-4bba-813d-d6267295e4da" containerID="10bdb3058bd37df4b9dfda52dfdc6b8f7b88074755c706636353da3539b356b2" exitCode=0 Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.826395 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jjfmr" event={"ID":"63e58811-7bf9-4bba-813d-d6267295e4da","Type":"ContainerDied","Data":"10bdb3058bd37df4b9dfda52dfdc6b8f7b88074755c706636353da3539b356b2"} Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.829390 4710 generic.go:334] "Generic (PLEG): container finished" podID="43c3704a-dd9e-4512-858a-e7de0883d025" containerID="60dcd3bfbd2b7f73e2d10a4fbf69da0c4cad7c2d91052db0b43bff7a6fe46a66" exitCode=0 Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.829433 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6103-account-create-update-5v27b" event={"ID":"43c3704a-dd9e-4512-858a-e7de0883d025","Type":"ContainerDied","Data":"60dcd3bfbd2b7f73e2d10a4fbf69da0c4cad7c2d91052db0b43bff7a6fe46a66"} Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.832719 4710 generic.go:334] "Generic (PLEG): container finished" podID="260df611-7b77-4b0d-b58a-beae48fe7e46" containerID="8bfd49f29c81fba223c6522d619b1404f21ba362b66c14b4f8c737baf938f6ac" exitCode=0 Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.832855 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e303-account-create-update-s2h9m" 
event={"ID":"260df611-7b77-4b0d-b58a-beae48fe7e46","Type":"ContainerDied","Data":"8bfd49f29c81fba223c6522d619b1404f21ba362b66c14b4f8c737baf938f6ac"} Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.834445 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa610e74-7719-43b5-ae08-ea611158b446","Type":"ContainerStarted","Data":"9beaf98f450f2c761f702d0c898965e65380592cea1d4bcfe3e004d4063132a6"} Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.837086 4710 generic.go:334] "Generic (PLEG): container finished" podID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerID="25cd53bf119f2d67e1e659a9f155f09ff66968f2331d5606757803945df5375c" exitCode=0 Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.837134 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f59c3678-cb58-4462-9ef6-7d91911117ee","Type":"ContainerDied","Data":"25cd53bf119f2d67e1e659a9f155f09ff66968f2331d5606757803945df5375c"} Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.842594 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerStarted","Data":"4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20"} Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.849166 4710 generic.go:334] "Generic (PLEG): container finished" podID="42b437e4-c7f2-4750-82e0-b75ab9bc0ea0" containerID="8243f35943325f6a4d70ea6d32ccc7d37b0040c4f024b39e8fbac33b2331aa36" exitCode=0 Nov 28 17:20:10 crc kubenswrapper[4710]: I1128 17:20:10.849229 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" event={"ID":"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0","Type":"ContainerDied","Data":"8243f35943325f6a4d70ea6d32ccc7d37b0040c4f024b39e8fbac33b2331aa36"} Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.344388 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.349373 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6459d5bc5f-vhnpr" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.481564 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.679352 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43c3704a-dd9e-4512-858a-e7de0883d025-operator-scripts\") pod \"43c3704a-dd9e-4512-858a-e7de0883d025\" (UID: \"43c3704a-dd9e-4512-858a-e7de0883d025\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.679978 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6b2l\" (UniqueName: \"kubernetes.io/projected/43c3704a-dd9e-4512-858a-e7de0883d025-kube-api-access-s6b2l\") pod \"43c3704a-dd9e-4512-858a-e7de0883d025\" (UID: \"43c3704a-dd9e-4512-858a-e7de0883d025\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.680514 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43c3704a-dd9e-4512-858a-e7de0883d025-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43c3704a-dd9e-4512-858a-e7de0883d025" (UID: "43c3704a-dd9e-4512-858a-e7de0883d025"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.689931 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c3704a-dd9e-4512-858a-e7de0883d025-kube-api-access-s6b2l" (OuterVolumeSpecName: "kube-api-access-s6b2l") pod "43c3704a-dd9e-4512-858a-e7de0883d025" (UID: "43c3704a-dd9e-4512-858a-e7de0883d025"). InnerVolumeSpecName "kube-api-access-s6b2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.782975 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43c3704a-dd9e-4512-858a-e7de0883d025-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.783019 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6b2l\" (UniqueName: \"kubernetes.io/projected/43c3704a-dd9e-4512-858a-e7de0883d025-kube-api-access-s6b2l\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.812534 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.819745 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.858027 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884291 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww25f\" (UniqueName: \"kubernetes.io/projected/18dbb15b-b948-436e-8bf0-3800d84f58a3-kube-api-access-ww25f\") pod \"18dbb15b-b948-436e-8bf0-3800d84f58a3\" (UID: \"18dbb15b-b948-436e-8bf0-3800d84f58a3\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884328 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-scripts\") pod \"f59c3678-cb58-4462-9ef6-7d91911117ee\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884347 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-combined-ca-bundle\") pod \"f59c3678-cb58-4462-9ef6-7d91911117ee\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884381 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfgkz\" (UniqueName: \"kubernetes.io/projected/f59c3678-cb58-4462-9ef6-7d91911117ee-kube-api-access-mfgkz\") pod \"f59c3678-cb58-4462-9ef6-7d91911117ee\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884408 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-config-data\") pod \"f59c3678-cb58-4462-9ef6-7d91911117ee\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884445 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-httpd-run\") pod \"f59c3678-cb58-4462-9ef6-7d91911117ee\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884489 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18dbb15b-b948-436e-8bf0-3800d84f58a3-operator-scripts\") pod \"18dbb15b-b948-436e-8bf0-3800d84f58a3\" (UID: \"18dbb15b-b948-436e-8bf0-3800d84f58a3\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884521 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"f59c3678-cb58-4462-9ef6-7d91911117ee\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884558 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-operator-scripts\") pod \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\" (UID: \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884629 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-internal-tls-certs\") pod \"f59c3678-cb58-4462-9ef6-7d91911117ee\" (UID: 
\"f59c3678-cb58-4462-9ef6-7d91911117ee\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884651 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-logs\") pod \"f59c3678-cb58-4462-9ef6-7d91911117ee\" (UID: \"f59c3678-cb58-4462-9ef6-7d91911117ee\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.884678 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whqss\" (UniqueName: \"kubernetes.io/projected/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-kube-api-access-whqss\") pod \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\" (UID: \"bb564cd2-ed57-49d0-9b9a-a193e5f8418b\") " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.888073 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f59c3678-cb58-4462-9ef6-7d91911117ee" (UID: "f59c3678-cb58-4462-9ef6-7d91911117ee"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.890209 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa610e74-7719-43b5-ae08-ea611158b446","Type":"ContainerStarted","Data":"44b7cac3f765b66c18e52575ea00eb54a79024eaef37196fa9802e13cd764380"} Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.890722 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bb564cd2-ed57-49d0-9b9a-a193e5f8418b" (UID: "bb564cd2-ed57-49d0-9b9a-a193e5f8418b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.891120 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18dbb15b-b948-436e-8bf0-3800d84f58a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18dbb15b-b948-436e-8bf0-3800d84f58a3" (UID: "18dbb15b-b948-436e-8bf0-3800d84f58a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.895310 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "f59c3678-cb58-4462-9ef6-7d91911117ee" (UID: "f59c3678-cb58-4462-9ef6-7d91911117ee"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.897648 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-kube-api-access-whqss" (OuterVolumeSpecName: "kube-api-access-whqss") pod "bb564cd2-ed57-49d0-9b9a-a193e5f8418b" (UID: "bb564cd2-ed57-49d0-9b9a-a193e5f8418b"). InnerVolumeSpecName "kube-api-access-whqss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.897958 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-logs" (OuterVolumeSpecName: "logs") pod "f59c3678-cb58-4462-9ef6-7d91911117ee" (UID: "f59c3678-cb58-4462-9ef6-7d91911117ee"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.902848 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.902994 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f59c3678-cb58-4462-9ef6-7d91911117ee","Type":"ContainerDied","Data":"dc0082634e8ee9745f839750fd75a3afb92e1babe0fae946886b75a141d06e14"} Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.903091 4710 scope.go:117] "RemoveContainer" containerID="25cd53bf119f2d67e1e659a9f155f09ff66968f2331d5606757803945df5375c" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.909687 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-xzg2r" event={"ID":"bb564cd2-ed57-49d0-9b9a-a193e5f8418b","Type":"ContainerDied","Data":"40dfe7f51cd651b806bf886fef92c378669a199a704f50b0fde6165eede50d8c"} Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.909930 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40dfe7f51cd651b806bf886fef92c378669a199a704f50b0fde6165eede50d8c" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.910811 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-xzg2r" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.913544 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f59c3678-cb58-4462-9ef6-7d91911117ee-kube-api-access-mfgkz" (OuterVolumeSpecName: "kube-api-access-mfgkz") pod "f59c3678-cb58-4462-9ef6-7d91911117ee" (UID: "f59c3678-cb58-4462-9ef6-7d91911117ee"). InnerVolumeSpecName "kube-api-access-mfgkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.925146 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xbg2v" event={"ID":"18dbb15b-b948-436e-8bf0-3800d84f58a3","Type":"ContainerDied","Data":"c9fe9368254b2f1b011fbad4e93e6db00fdb851cee4c9b6b55906bdef3c5ebcc"} Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.925196 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9fe9368254b2f1b011fbad4e93e6db00fdb851cee4c9b6b55906bdef3c5ebcc" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.925171 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xbg2v" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.941538 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-6103-account-create-update-5v27b" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.943520 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-6103-account-create-update-5v27b" event={"ID":"43c3704a-dd9e-4512-858a-e7de0883d025","Type":"ContainerDied","Data":"4bff12312f5d09016bab1fe92da7999d75cd644cdb6b52719278bfbc69689bf7"} Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.943569 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bff12312f5d09016bab1fe92da7999d75cd644cdb6b52719278bfbc69689bf7" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.943646 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-scripts" (OuterVolumeSpecName: "scripts") pod "f59c3678-cb58-4462-9ef6-7d91911117ee" (UID: "f59c3678-cb58-4462-9ef6-7d91911117ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.964830 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18dbb15b-b948-436e-8bf0-3800d84f58a3-kube-api-access-ww25f" (OuterVolumeSpecName: "kube-api-access-ww25f") pod "18dbb15b-b948-436e-8bf0-3800d84f58a3" (UID: "18dbb15b-b948-436e-8bf0-3800d84f58a3"). InnerVolumeSpecName "kube-api-access-ww25f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.972068 4710 scope.go:117] "RemoveContainer" containerID="86b3dfd43dbd66f7b02cf8515b1bff02bbc8ae27511e132ec3c0b461f4a4d40e" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.988938 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18dbb15b-b948-436e-8bf0-3800d84f58a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.988988 4710 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.989002 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.989014 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.989029 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whqss\" (UniqueName: \"kubernetes.io/projected/bb564cd2-ed57-49d0-9b9a-a193e5f8418b-kube-api-access-whqss\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.989040 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww25f\" (UniqueName: \"kubernetes.io/projected/18dbb15b-b948-436e-8bf0-3800d84f58a3-kube-api-access-ww25f\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.989053 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 
17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.989064 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfgkz\" (UniqueName: \"kubernetes.io/projected/f59c3678-cb58-4462-9ef6-7d91911117ee-kube-api-access-mfgkz\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:11 crc kubenswrapper[4710]: I1128 17:20:11.989073 4710 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f59c3678-cb58-4462-9ef6-7d91911117ee-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.312660 4710 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.329917 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f59c3678-cb58-4462-9ef6-7d91911117ee" (UID: "f59c3678-cb58-4462-9ef6-7d91911117ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.403915 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.404433 4710 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.410415 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-config-data" (OuterVolumeSpecName: "config-data") pod "f59c3678-cb58-4462-9ef6-7d91911117ee" (UID: "f59c3678-cb58-4462-9ef6-7d91911117ee"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.413193 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f59c3678-cb58-4462-9ef6-7d91911117ee" (UID: "f59c3678-cb58-4462-9ef6-7d91911117ee"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.507498 4710 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.507525 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f59c3678-cb58-4462-9ef6-7d91911117ee-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.526003 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.543328 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.553257 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.597396 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:20:12 crc kubenswrapper[4710]: E1128 17:20:12.597838 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb564cd2-ed57-49d0-9b9a-a193e5f8418b" containerName="mariadb-database-create" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.597851 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb564cd2-ed57-49d0-9b9a-a193e5f8418b" containerName="mariadb-database-create" Nov 28 17:20:12 crc kubenswrapper[4710]: E1128 17:20:12.597867 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerName="glance-httpd" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.597874 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerName="glance-httpd" Nov 28 17:20:12 crc kubenswrapper[4710]: E1128 17:20:12.598054 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260df611-7b77-4b0d-b58a-beae48fe7e46" containerName="mariadb-account-create-update" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.598064 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="260df611-7b77-4b0d-b58a-beae48fe7e46" containerName="mariadb-account-create-update" Nov 28 17:20:12 crc kubenswrapper[4710]: E1128 17:20:12.598708 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dbb15b-b948-436e-8bf0-3800d84f58a3" containerName="mariadb-database-create" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.598720 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dbb15b-b948-436e-8bf0-3800d84f58a3" containerName="mariadb-database-create" Nov 28 17:20:12 crc kubenswrapper[4710]: E1128 17:20:12.598733 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerName="glance-log" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.598739 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerName="glance-log" Nov 28 17:20:12 crc kubenswrapper[4710]: E1128 17:20:12.598772 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c3704a-dd9e-4512-858a-e7de0883d025" containerName="mariadb-account-create-update" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.598780 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c3704a-dd9e-4512-858a-e7de0883d025" containerName="mariadb-account-create-update" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.599054 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="18dbb15b-b948-436e-8bf0-3800d84f58a3" containerName="mariadb-database-create" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.599077 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerName="glance-log" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.599089 4710 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="bb564cd2-ed57-49d0-9b9a-a193e5f8418b" containerName="mariadb-database-create" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.599100 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c3704a-dd9e-4512-858a-e7de0883d025" containerName="mariadb-account-create-update" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.599116 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="260df611-7b77-4b0d-b58a-beae48fe7e46" containerName="mariadb-account-create-update" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.599125 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f59c3678-cb58-4462-9ef6-7d91911117ee" containerName="glance-httpd" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.600386 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.607657 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.607865 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.661611 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.710490 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/260df611-7b77-4b0d-b58a-beae48fe7e46-operator-scripts\") pod \"260df611-7b77-4b0d-b58a-beae48fe7e46\" (UID: \"260df611-7b77-4b0d-b58a-beae48fe7e46\") " Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.711132 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzwr4\" (UniqueName: \"kubernetes.io/projected/260df611-7b77-4b0d-b58a-beae48fe7e46-kube-api-access-hzwr4\") pod \"260df611-7b77-4b0d-b58a-beae48fe7e46\" (UID: \"260df611-7b77-4b0d-b58a-beae48fe7e46\") " Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.711601 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.711739 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.712038 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.712170 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2hg9b\" (UniqueName: \"kubernetes.io/projected/361b0d95-8489-4799-bc9b-a6232aee65d3-kube-api-access-2hg9b\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.712292 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.712423 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.712539 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/361b0d95-8489-4799-bc9b-a6232aee65d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.712687 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/361b0d95-8489-4799-bc9b-a6232aee65d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.713861 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260df611-7b77-4b0d-b58a-beae48fe7e46-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "260df611-7b77-4b0d-b58a-beae48fe7e46" (UID: "260df611-7b77-4b0d-b58a-beae48fe7e46"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.716063 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260df611-7b77-4b0d-b58a-beae48fe7e46-kube-api-access-hzwr4" (OuterVolumeSpecName: "kube-api-access-hzwr4") pod "260df611-7b77-4b0d-b58a-beae48fe7e46" (UID: "260df611-7b77-4b0d-b58a-beae48fe7e46"). InnerVolumeSpecName "kube-api-access-hzwr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.744040 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.800107 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.818677 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hg9b\" (UniqueName: \"kubernetes.io/projected/361b0d95-8489-4799-bc9b-a6232aee65d3-kube-api-access-2hg9b\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.818766 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.818874 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.818903 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/361b0d95-8489-4799-bc9b-a6232aee65d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.818991 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/361b0d95-8489-4799-bc9b-a6232aee65d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.819017 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.819042 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.819272 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.819371 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzwr4\" (UniqueName: \"kubernetes.io/projected/260df611-7b77-4b0d-b58a-beae48fe7e46-kube-api-access-hzwr4\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.819388 4710 reconciler_common.go:293] "Volume detached for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/260df611-7b77-4b0d-b58a-beae48fe7e46-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.820110 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/361b0d95-8489-4799-bc9b-a6232aee65d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.821246 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.821784 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/361b0d95-8489-4799-bc9b-a6232aee65d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.826142 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.842526 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hg9b\" (UniqueName: \"kubernetes.io/projected/361b0d95-8489-4799-bc9b-a6232aee65d3-kube-api-access-2hg9b\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.860941 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.862097 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.866497 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.868244 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/361b0d95-8489-4799-bc9b-a6232aee65d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"361b0d95-8489-4799-bc9b-a6232aee65d3\") " 
pod="openstack/glance-default-internal-api-0" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.923701 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-724lc\" (UniqueName: \"kubernetes.io/projected/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-kube-api-access-724lc\") pod \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\" (UID: \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\") " Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.923830 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75hv4\" (UniqueName: \"kubernetes.io/projected/63e58811-7bf9-4bba-813d-d6267295e4da-kube-api-access-75hv4\") pod \"63e58811-7bf9-4bba-813d-d6267295e4da\" (UID: \"63e58811-7bf9-4bba-813d-d6267295e4da\") " Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.923952 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63e58811-7bf9-4bba-813d-d6267295e4da-operator-scripts\") pod \"63e58811-7bf9-4bba-813d-d6267295e4da\" (UID: \"63e58811-7bf9-4bba-813d-d6267295e4da\") " Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.924023 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-operator-scripts\") pod \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\" (UID: \"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0\") " Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.928196 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42b437e4-c7f2-4750-82e0-b75ab9bc0ea0" (UID: "42b437e4-c7f2-4750-82e0-b75ab9bc0ea0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.934185 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-kube-api-access-724lc" (OuterVolumeSpecName: "kube-api-access-724lc") pod "42b437e4-c7f2-4750-82e0-b75ab9bc0ea0" (UID: "42b437e4-c7f2-4750-82e0-b75ab9bc0ea0"). InnerVolumeSpecName "kube-api-access-724lc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:12 crc kubenswrapper[4710]: I1128 17:20:12.945643 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e58811-7bf9-4bba-813d-d6267295e4da-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63e58811-7bf9-4bba-813d-d6267295e4da" (UID: "63e58811-7bf9-4bba-813d-d6267295e4da"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.000081 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e58811-7bf9-4bba-813d-d6267295e4da-kube-api-access-75hv4" (OuterVolumeSpecName: "kube-api-access-75hv4") pod "63e58811-7bf9-4bba-813d-d6267295e4da" (UID: "63e58811-7bf9-4bba-813d-d6267295e4da"). InnerVolumeSpecName "kube-api-access-75hv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.026033 4710 generic.go:334] "Generic (PLEG): container finished" podID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerID="643e6ab79e908290f4b7feca23692019d91eeb9fb5cf9d88eb79e505e8bfdfda" exitCode=137 Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.026132 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87a9c794-98dd-4e4c-bd00-9c887d614b1a","Type":"ContainerDied","Data":"643e6ab79e908290f4b7feca23692019d91eeb9fb5cf9d88eb79e505e8bfdfda"} Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.028288 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-jjfmr" event={"ID":"63e58811-7bf9-4bba-813d-d6267295e4da","Type":"ContainerDied","Data":"bc50af149c8c3d960c0745720d023ed25127434dbf119ea529cae6ef9d2daaec"} Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.028356 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc50af149c8c3d960c0745720d023ed25127434dbf119ea529cae6ef9d2daaec" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.028319 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-jjfmr" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.032207 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e303-account-create-update-s2h9m" event={"ID":"260df611-7b77-4b0d-b58a-beae48fe7e46","Type":"ContainerDied","Data":"d4985330a57e1022eafb67635123ab269eddbcbc510f19f34c0b74bde8d19a6a"} Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.032252 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4985330a57e1022eafb67635123ab269eddbcbc510f19f34c0b74bde8d19a6a" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.032357 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e303-account-create-update-s2h9m" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.060637 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa610e74-7719-43b5-ae08-ea611158b446","Type":"ContainerStarted","Data":"c54d373a1d90948b24f67b2f22a19e74ecdd1bb6f45b958aa026fbe0ce2bffae"} Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.062578 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63e58811-7bf9-4bba-813d-d6267295e4da-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.068842 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerStarted","Data":"42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49"} Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.069076 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.069128 4710 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.069099 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.069549 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="ceilometer-central-agent" containerID="cri-o://bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66" gracePeriod=30 Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.073229 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="proxy-httpd" containerID="cri-o://42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49" gracePeriod=30 Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.073363 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="sg-core" containerID="cri-o://4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20" gracePeriod=30 Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.073420 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="ceilometer-notification-agent" containerID="cri-o://656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6" gracePeriod=30 Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.075357 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-724lc\" (UniqueName: \"kubernetes.io/projected/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0-kube-api-access-724lc\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.082052 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75hv4\" (UniqueName: \"kubernetes.io/projected/63e58811-7bf9-4bba-813d-d6267295e4da-kube-api-access-75hv4\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.089880 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" event={"ID":"42b437e4-c7f2-4750-82e0-b75ab9bc0ea0","Type":"ContainerDied","Data":"e43ddd096baff7962b6291173e931bdf946aabd9209134aea53e1f4bd0377b9f"} Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.089939 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e43ddd096baff7962b6291173e931bdf946aabd9209134aea53e1f4bd0377b9f" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.090046 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0b59-account-create-update-k7gbg" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.131880 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.131865791 podStartE2EDuration="5.131865791s" podCreationTimestamp="2025-11-28 17:20:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:13.130796089 +0000 UTC m=+1302.389096134" watchObservedRunningTime="2025-11-28 17:20:13.131865791 +0000 UTC m=+1302.390165836" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.147274 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.184400 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a9c794-98dd-4e4c-bd00-9c887d614b1a-logs\") pod \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.184453 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-scripts\") pod \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.184530 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwshf\" (UniqueName: \"kubernetes.io/projected/87a9c794-98dd-4e4c-bd00-9c887d614b1a-kube-api-access-kwshf\") pod \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.184589 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data\") pod \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.184616 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-combined-ca-bundle\") pod \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.184721 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87a9c794-98dd-4e4c-bd00-9c887d614b1a-etc-machine-id\") pod \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.184805 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data-custom\") pod \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\" (UID: \"87a9c794-98dd-4e4c-bd00-9c887d614b1a\") " Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.187250 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87a9c794-98dd-4e4c-bd00-9c887d614b1a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "87a9c794-98dd-4e4c-bd00-9c887d614b1a" (UID: "87a9c794-98dd-4e4c-bd00-9c887d614b1a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.188403 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87a9c794-98dd-4e4c-bd00-9c887d614b1a-logs" (OuterVolumeSpecName: "logs") pod "87a9c794-98dd-4e4c-bd00-9c887d614b1a" (UID: "87a9c794-98dd-4e4c-bd00-9c887d614b1a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.201582 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87a9c794-98dd-4e4c-bd00-9c887d614b1a" (UID: "87a9c794-98dd-4e4c-bd00-9c887d614b1a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.201700 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-scripts" (OuterVolumeSpecName: "scripts") pod "87a9c794-98dd-4e4c-bd00-9c887d614b1a" (UID: "87a9c794-98dd-4e4c-bd00-9c887d614b1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.202035 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f59c3678-cb58-4462-9ef6-7d91911117ee" path="/var/lib/kubelet/pods/f59c3678-cb58-4462-9ef6-7d91911117ee/volumes" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.240842 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a9c794-98dd-4e4c-bd00-9c887d614b1a-kube-api-access-kwshf" (OuterVolumeSpecName: "kube-api-access-kwshf") pod "87a9c794-98dd-4e4c-bd00-9c887d614b1a" (UID: "87a9c794-98dd-4e4c-bd00-9c887d614b1a"). InnerVolumeSpecName "kube-api-access-kwshf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.242212 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.602453052 podStartE2EDuration="9.242184243s" podCreationTimestamp="2025-11-28 17:20:04 +0000 UTC" firstStartedPulling="2025-11-28 17:20:05.869911096 +0000 UTC m=+1295.128211151" lastFinishedPulling="2025-11-28 17:20:11.509642297 +0000 UTC m=+1300.767942342" observedRunningTime="2025-11-28 17:20:13.186817621 +0000 UTC m=+1302.445117666" watchObservedRunningTime="2025-11-28 17:20:13.242184243 +0000 UTC m=+1302.500484288" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.260542 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87a9c794-98dd-4e4c-bd00-9c887d614b1a" (UID: "87a9c794-98dd-4e4c-bd00-9c887d614b1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.284823 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data" (OuterVolumeSpecName: "config-data") pod "87a9c794-98dd-4e4c-bd00-9c887d614b1a" (UID: "87a9c794-98dd-4e4c-bd00-9c887d614b1a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.288655 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwshf\" (UniqueName: \"kubernetes.io/projected/87a9c794-98dd-4e4c-bd00-9c887d614b1a-kube-api-access-kwshf\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.288707 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.288721 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.288732 4710 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87a9c794-98dd-4e4c-bd00-9c887d614b1a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.288742 4710 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.288773 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87a9c794-98dd-4e4c-bd00-9c887d614b1a-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.288786 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87a9c794-98dd-4e4c-bd00-9c887d614b1a-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:13 crc kubenswrapper[4710]: E1128 17:20:13.559225 4710 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63e58811_7bf9_4bba_813d_d6267295e4da.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63e58811_7bf9_4bba_813d_d6267295e4da.slice/crio-bc50af149c8c3d960c0745720d023ed25127434dbf119ea529cae6ef9d2daaec\": RecentStats: unable to find data in memory cache]" Nov 28 17:20:13 crc kubenswrapper[4710]: I1128 17:20:13.909266 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.101187 4710 generic.go:334] "Generic (PLEG): container finished" podID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerID="42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49" exitCode=0 Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.101214 4710 generic.go:334] "Generic (PLEG): container finished" podID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerID="4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20" exitCode=2 Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.101221 4710 generic.go:334] "Generic (PLEG): container finished" podID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerID="656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6" exitCode=0 Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.101257 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerDied","Data":"42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49"} Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.101280 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerDied","Data":"4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20"} Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.101291 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerDied","Data":"656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6"} Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.102409 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"361b0d95-8489-4799-bc9b-a6232aee65d3","Type":"ContainerStarted","Data":"c5405ac5954a384fba329f2a7995eae3834ddb9eca11161b60b3c74503456d11"} Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.105221 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87a9c794-98dd-4e4c-bd00-9c887d614b1a","Type":"ContainerDied","Data":"e53fa64301ee21b608367c7a5b8fb185b96695488c374eb76144ad8bc18ec452"} Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.105271 4710 scope.go:117] "RemoveContainer" containerID="643e6ab79e908290f4b7feca23692019d91eeb9fb5cf9d88eb79e505e8bfdfda" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.105276 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.144552 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.150276 4710 scope.go:117] "RemoveContainer" containerID="c09950b46b487f663145d843061c6fb7cb4cc856437d8ec5ce3a130b9a8c4e8c" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.156654 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.174992 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:20:14 crc kubenswrapper[4710]: E1128 17:20:14.175657 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerName="cinder-api-log" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.175727 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerName="cinder-api-log" Nov 28 17:20:14 crc kubenswrapper[4710]: E1128 17:20:14.175849 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e58811-7bf9-4bba-813d-d6267295e4da" containerName="mariadb-database-create" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.175925 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e58811-7bf9-4bba-813d-d6267295e4da" containerName="mariadb-database-create" Nov 28 17:20:14 crc kubenswrapper[4710]: E1128 17:20:14.175996 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerName="cinder-api" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.176054 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerName="cinder-api" Nov 
28 17:20:14 crc kubenswrapper[4710]: E1128 17:20:14.176123 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b437e4-c7f2-4750-82e0-b75ab9bc0ea0" containerName="mariadb-account-create-update" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.176188 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b437e4-c7f2-4750-82e0-b75ab9bc0ea0" containerName="mariadb-account-create-update" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.176474 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e58811-7bf9-4bba-813d-d6267295e4da" containerName="mariadb-database-create" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.176550 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerName="cinder-api-log" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.176625 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b437e4-c7f2-4750-82e0-b75ab9bc0ea0" containerName="mariadb-account-create-update" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.176691 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" containerName="cinder-api" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.177883 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.184820 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.184964 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.185079 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.203429 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.312734 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02dda5a0-8c02-4b9e-a122-573bc14ef753-etc-machine-id\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.312818 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-config-data\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.312843 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-public-tls-certs\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.312875 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl4d8\" (UniqueName: \"kubernetes.io/projected/02dda5a0-8c02-4b9e-a122-573bc14ef753-kube-api-access-tl4d8\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " 
pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.312952 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02dda5a0-8c02-4b9e-a122-573bc14ef753-logs\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.313032 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.313073 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-scripts\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.313105 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.313129 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-config-data-custom\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415179 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415250 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-scripts\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415283 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415304 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-config-data-custom\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415401 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/02dda5a0-8c02-4b9e-a122-573bc14ef753-etc-machine-id\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415435 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-config-data\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415455 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-public-tls-certs\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415489 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl4d8\" (UniqueName: \"kubernetes.io/projected/02dda5a0-8c02-4b9e-a122-573bc14ef753-kube-api-access-tl4d8\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415559 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02dda5a0-8c02-4b9e-a122-573bc14ef753-logs\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.415874 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/02dda5a0-8c02-4b9e-a122-573bc14ef753-etc-machine-id\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.416096 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02dda5a0-8c02-4b9e-a122-573bc14ef753-logs\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.421395 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-public-tls-certs\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.422414 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-config-data-custom\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.423284 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-scripts\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.423744 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.436918 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.437591 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl4d8\" (UniqueName: \"kubernetes.io/projected/02dda5a0-8c02-4b9e-a122-573bc14ef753-kube-api-access-tl4d8\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.437947 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02dda5a0-8c02-4b9e-a122-573bc14ef753-config-data\") pod \"cinder-api-0\" (UID: \"02dda5a0-8c02-4b9e-a122-573bc14ef753\") " pod="openstack/cinder-api-0" Nov 28 17:20:14 crc kubenswrapper[4710]: I1128 17:20:14.520172 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 28 17:20:15 crc kubenswrapper[4710]: I1128 17:20:15.039243 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 28 17:20:15 crc kubenswrapper[4710]: I1128 17:20:15.122694 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"02dda5a0-8c02-4b9e-a122-573bc14ef753","Type":"ContainerStarted","Data":"626cd0899e36b7d971b5e23b4f21ef1225a29d20b9e5a0722732698a295576ba"} Nov 28 17:20:15 crc kubenswrapper[4710]: I1128 17:20:15.125460 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"361b0d95-8489-4799-bc9b-a6232aee65d3","Type":"ContainerStarted","Data":"2d65693fa79e4f01b2da934e8980e678fa255ee440a35319f986abc10389fcbc"} Nov 28 17:20:15 crc kubenswrapper[4710]: I1128 17:20:15.155407 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a9c794-98dd-4e4c-bd00-9c887d614b1a" path="/var/lib/kubelet/pods/87a9c794-98dd-4e4c-bd00-9c887d614b1a/volumes" Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.217279 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"02dda5a0-8c02-4b9e-a122-573bc14ef753","Type":"ContainerStarted","Data":"3468bd591576fb3ba8fe8e0dbfdd0b0b6b076ed04a155a6f85eb5b0f7b66b954"} Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.224256 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"361b0d95-8489-4799-bc9b-a6232aee65d3","Type":"ContainerStarted","Data":"1cb8cbe3f226ec811b9135068cf5d114e6f0fcba209b47bb732afeeef39fa442"} Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.265666 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.26565139 podStartE2EDuration="4.26565139s" podCreationTimestamp="2025-11-28 17:20:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:16.243867712 +0000 UTC m=+1305.502167767" 
watchObservedRunningTime="2025-11-28 17:20:16.26565139 +0000 UTC m=+1305.523951435" Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.797259 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.925872 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-config-data\") pod \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.926212 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-scripts\") pod \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.926306 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-combined-ca-bundle\") pod \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.926366 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-log-httpd\") pod \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.926532 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-sg-core-conf-yaml\") pod \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.926560 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-run-httpd\") pod \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.926586 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pg5t4\" (UniqueName: \"kubernetes.io/projected/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-kube-api-access-pg5t4\") pod \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\" (UID: \"dccb7af3-8bba-460b-a7c3-cb0d23e4013f\") " Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.926891 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dccb7af3-8bba-460b-a7c3-cb0d23e4013f" (UID: "dccb7af3-8bba-460b-a7c3-cb0d23e4013f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.926903 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dccb7af3-8bba-460b-a7c3-cb0d23e4013f" (UID: "dccb7af3-8bba-460b-a7c3-cb0d23e4013f"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.927369 4710 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.927390 4710 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.932104 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-scripts" (OuterVolumeSpecName: "scripts") pod "dccb7af3-8bba-460b-a7c3-cb0d23e4013f" (UID: "dccb7af3-8bba-460b-a7c3-cb0d23e4013f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.933894 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-kube-api-access-pg5t4" (OuterVolumeSpecName: "kube-api-access-pg5t4") pod "dccb7af3-8bba-460b-a7c3-cb0d23e4013f" (UID: "dccb7af3-8bba-460b-a7c3-cb0d23e4013f"). InnerVolumeSpecName "kube-api-access-pg5t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:16 crc kubenswrapper[4710]: I1128 17:20:16.990977 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dccb7af3-8bba-460b-a7c3-cb0d23e4013f" (UID: "dccb7af3-8bba-460b-a7c3-cb0d23e4013f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.029620 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.029661 4710 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.029677 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pg5t4\" (UniqueName: \"kubernetes.io/projected/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-kube-api-access-pg5t4\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.041951 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dccb7af3-8bba-460b-a7c3-cb0d23e4013f" (UID: "dccb7af3-8bba-460b-a7c3-cb0d23e4013f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.050351 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-config-data" (OuterVolumeSpecName: "config-data") pod "dccb7af3-8bba-460b-a7c3-cb0d23e4013f" (UID: "dccb7af3-8bba-460b-a7c3-cb0d23e4013f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.132002 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.132291 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dccb7af3-8bba-460b-a7c3-cb0d23e4013f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.239585 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"02dda5a0-8c02-4b9e-a122-573bc14ef753","Type":"ContainerStarted","Data":"90098e5d74e931b6cf2fe2496c020f6e2c9439aaffeeccc8497076c7d4c6af9f"} Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.241218 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.245149 4710 generic.go:334] "Generic (PLEG): container finished" podID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerID="bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66" exitCode=0 Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.245834 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerDied","Data":"bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66"} Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.245906 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dccb7af3-8bba-460b-a7c3-cb0d23e4013f","Type":"ContainerDied","Data":"e9aefd0a3332f22b83671025f055dc9b72a9e0cc70e9bf658736fe35fd3e7d94"} Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.245855 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.245962 4710 scope.go:117] "RemoveContainer" containerID="42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.264075 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.264050612 podStartE2EDuration="3.264050612s" podCreationTimestamp="2025-11-28 17:20:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:17.261846054 +0000 UTC m=+1306.520146099" watchObservedRunningTime="2025-11-28 17:20:17.264050612 +0000 UTC m=+1306.522350657" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.289080 4710 scope.go:117] "RemoveContainer" containerID="4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.305970 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.319104 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.327994 4710 scope.go:117] "RemoveContainer" containerID="656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.334378 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:17 crc kubenswrapper[4710]: E1128 17:20:17.334998 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="ceilometer-central-agent" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.335023 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="ceilometer-central-agent" Nov 28 17:20:17 crc kubenswrapper[4710]: E1128 17:20:17.335039 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="ceilometer-notification-agent" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.335047 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="ceilometer-notification-agent" Nov 28 17:20:17 crc kubenswrapper[4710]: E1128 17:20:17.335063 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="proxy-httpd" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.335071 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="proxy-httpd" Nov 28 17:20:17 crc kubenswrapper[4710]: E1128 17:20:17.335100 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="sg-core" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.335107 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="sg-core" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.335332 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="ceilometer-central-agent" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.335363 4710 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="proxy-httpd" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.335386 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="sg-core" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.335404 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" containerName="ceilometer-notification-agent" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.337820 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.350486 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.350630 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.358434 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.377095 4710 scope.go:117] "RemoveContainer" containerID="bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.414193 4710 scope.go:117] "RemoveContainer" containerID="42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49" Nov 28 17:20:17 crc kubenswrapper[4710]: E1128 17:20:17.414634 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49\": container with ID starting with 42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49 not found: ID does not exist" containerID="42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.414679 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49"} err="failed to get container status \"42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49\": rpc error: code = NotFound desc = could not find container \"42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49\": container with ID starting with 42c3e6545b7eaf395a2411efeb2752b9c2ec0e32efec0b8215de9f0d79dd6a49 not found: ID does not exist" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.414730 4710 scope.go:117] "RemoveContainer" containerID="4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20" Nov 28 17:20:17 crc kubenswrapper[4710]: E1128 17:20:17.415189 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20\": container with ID starting with 4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20 not found: ID does not exist" containerID="4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.415221 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20"} err="failed to get container status \"4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20\": rpc error: code = 
NotFound desc = could not find container \"4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20\": container with ID starting with 4b86a3ce985900b20bb535843ec57d50e773511c858e951f44ba0dec85249f20 not found: ID does not exist" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.415244 4710 scope.go:117] "RemoveContainer" containerID="656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6" Nov 28 17:20:17 crc kubenswrapper[4710]: E1128 17:20:17.415751 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6\": container with ID starting with 656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6 not found: ID does not exist" containerID="656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.415779 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6"} err="failed to get container status \"656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6\": rpc error: code = NotFound desc = could not find container \"656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6\": container with ID starting with 656c4f37297daedbe195e778ba05695a87bb3ca2e6cbc726b53e1362e2940fb6 not found: ID does not exist" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.415806 4710 scope.go:117] "RemoveContainer" containerID="bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66" Nov 28 17:20:17 crc kubenswrapper[4710]: E1128 17:20:17.416258 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66\": container with ID starting with bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66 not found: ID does not exist" containerID="bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.416288 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66"} err="failed to get container status \"bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66\": rpc error: code = NotFound desc = could not find container \"bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66\": container with ID starting with bfd41b1209495aabadb869041870b771c51a4df7b1f758d6ddcabc94983e9c66 not found: ID does not exist" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.438187 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-log-httpd\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.438346 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfv6g\" (UniqueName: \"kubernetes.io/projected/7c96c986-033f-471f-9f5c-82699b7811e7-kube-api-access-xfv6g\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.438387 4710 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-run-httpd\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.438451 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.438524 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-scripts\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.438578 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.438639 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-config-data\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.540883 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-scripts\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.540999 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.541090 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-config-data\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.541133 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-log-httpd\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.541268 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfv6g\" (UniqueName: \"kubernetes.io/projected/7c96c986-033f-471f-9f5c-82699b7811e7-kube-api-access-xfv6g\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 
28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.541307 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-run-httpd\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.541388 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.542016 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-log-httpd\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.542024 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-run-httpd\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.547811 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.548889 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.548925 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-config-data\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.551369 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-scripts\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.567959 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfv6g\" (UniqueName: \"kubernetes.io/projected/7c96c986-033f-471f-9f5c-82699b7811e7-kube-api-access-xfv6g\") pod \"ceilometer-0\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " pod="openstack/ceilometer-0" Nov 28 17:20:17 crc kubenswrapper[4710]: I1128 17:20:17.682347 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.090581 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w7rm2"] Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.092488 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.094802 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.094854 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-d89mb" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.095038 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.121897 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w7rm2"] Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.153888 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-config-data\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.154019 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgmj2\" (UniqueName: \"kubernetes.io/projected/f4894395-7727-4595-9a50-7a1b2b55a525-kube-api-access-mgmj2\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.154053 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-scripts\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.154166 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.234903 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.263529 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-config-data\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.263677 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgmj2\" (UniqueName: 
\"kubernetes.io/projected/f4894395-7727-4595-9a50-7a1b2b55a525-kube-api-access-mgmj2\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.263722 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-scripts\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.263901 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.273943 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-scripts\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.314891 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-config-data\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.315682 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.325132 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerStarted","Data":"27188249e583d7c0345264a2e7200bde46ae05ae25089b51484dd17651d51298"} Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.330571 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgmj2\" (UniqueName: \"kubernetes.io/projected/f4894395-7727-4595-9a50-7a1b2b55a525-kube-api-access-mgmj2\") pod \"nova-cell0-conductor-db-sync-w7rm2\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.416300 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.452677 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:18 crc kubenswrapper[4710]: I1128 17:20:18.931466 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w7rm2"] Nov 28 17:20:19 crc kubenswrapper[4710]: I1128 17:20:19.153229 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dccb7af3-8bba-460b-a7c3-cb0d23e4013f" path="/var/lib/kubelet/pods/dccb7af3-8bba-460b-a7c3-cb0d23e4013f/volumes" Nov 28 17:20:19 crc kubenswrapper[4710]: I1128 17:20:19.342298 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w7rm2" event={"ID":"f4894395-7727-4595-9a50-7a1b2b55a525","Type":"ContainerStarted","Data":"2344eb8bc13742268bf153c55dad3eb6902cbc397a01a71f476a24118708f65a"} Nov 28 17:20:19 crc kubenswrapper[4710]: I1128 17:20:19.343581 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerStarted","Data":"d8f699ae2aa2a897d725d7c1ac900654c4b6c3811991ebf35902aa85557f9b51"} Nov 28 17:20:19 crc kubenswrapper[4710]: I1128 17:20:19.424931 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 17:20:19 crc kubenswrapper[4710]: I1128 17:20:19.424986 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 28 17:20:19 crc kubenswrapper[4710]: I1128 17:20:19.466601 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 17:20:19 crc kubenswrapper[4710]: I1128 17:20:19.467565 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 28 17:20:20 crc kubenswrapper[4710]: I1128 17:20:20.362355 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerStarted","Data":"ee70ce60d58d2525220857f4e38d6811d04d073b2ad3fb0e92adb74b074b3d41"} Nov 28 17:20:20 crc kubenswrapper[4710]: I1128 17:20:20.362668 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 17:20:20 crc kubenswrapper[4710]: I1128 17:20:20.362810 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 28 17:20:21 crc kubenswrapper[4710]: I1128 17:20:21.231859 4710 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod18baf4b3-8f80-42fa-8291-377b5ae88a92"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod18baf4b3-8f80-42fa-8291-377b5ae88a92] : Timed out while waiting for systemd to remove kubepods-besteffort-pod18baf4b3_8f80_42fa_8291_377b5ae88a92.slice" Nov 28 17:20:21 crc kubenswrapper[4710]: I1128 17:20:21.393696 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerStarted","Data":"4ca4b349ce3e94d57d1355cb87aa9c606aad904a2a877602ca3afaae8864905d"} Nov 28 17:20:22 crc kubenswrapper[4710]: I1128 17:20:22.402813 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 28 17:20:22 crc 
Nov 28 17:20:22 crc kubenswrapper[4710]: I1128 17:20:22.403097 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.011668 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.013615 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.074119 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.074179 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.158580 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.160919 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.414987 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerStarted","Data":"6c8f9ceed0e51bd662fe08747b4d2a8cbef6711b3fd3e0997295635eddb38f5c"}
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.415109 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="ceilometer-central-agent" containerID="cri-o://d8f699ae2aa2a897d725d7c1ac900654c4b6c3811991ebf35902aa85557f9b51" gracePeriod=30
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.415206 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="proxy-httpd" containerID="cri-o://6c8f9ceed0e51bd662fe08747b4d2a8cbef6711b3fd3e0997295635eddb38f5c" gracePeriod=30
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.415261 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="sg-core" containerID="cri-o://4ca4b349ce3e94d57d1355cb87aa9c606aad904a2a877602ca3afaae8864905d" gracePeriod=30
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.415308 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="ceilometer-notification-agent" containerID="cri-o://ee70ce60d58d2525220857f4e38d6811d04d073b2ad3fb0e92adb74b074b3d41" gracePeriod=30
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.415809 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.415830 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Nov 28 17:20:23 crc kubenswrapper[4710]: I1128 17:20:23.460036 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.969420213 podStartE2EDuration="6.460013061s" podCreationTimestamp="2025-11-28 17:20:17 +0000 UTC" firstStartedPulling="2025-11-28 17:20:18.26680262 +0000 UTC m=+1307.525102665" lastFinishedPulling="2025-11-28 17:20:22.757395468 +0000 UTC m=+1312.015695513" observedRunningTime="2025-11-28 17:20:23.449207345 +0000 UTC m=+1312.707507390" watchObservedRunningTime="2025-11-28 17:20:23.460013061 +0000 UTC m=+1312.718313106"
Nov 28 17:20:24 crc kubenswrapper[4710]: I1128 17:20:24.437956 4710 generic.go:334] "Generic (PLEG): container finished" podID="7c96c986-033f-471f-9f5c-82699b7811e7" containerID="6c8f9ceed0e51bd662fe08747b4d2a8cbef6711b3fd3e0997295635eddb38f5c" exitCode=0
Nov 28 17:20:24 crc kubenswrapper[4710]: I1128 17:20:24.438310 4710 generic.go:334] "Generic (PLEG): container finished" podID="7c96c986-033f-471f-9f5c-82699b7811e7" containerID="4ca4b349ce3e94d57d1355cb87aa9c606aad904a2a877602ca3afaae8864905d" exitCode=2
Nov 28 17:20:24 crc kubenswrapper[4710]: I1128 17:20:24.438323 4710 generic.go:334] "Generic (PLEG): container finished" podID="7c96c986-033f-471f-9f5c-82699b7811e7" containerID="ee70ce60d58d2525220857f4e38d6811d04d073b2ad3fb0e92adb74b074b3d41" exitCode=0
Nov 28 17:20:24 crc kubenswrapper[4710]: I1128 17:20:24.439993 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerDied","Data":"6c8f9ceed0e51bd662fe08747b4d2a8cbef6711b3fd3e0997295635eddb38f5c"}
Nov 28 17:20:24 crc kubenswrapper[4710]: I1128 17:20:24.440068 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerDied","Data":"4ca4b349ce3e94d57d1355cb87aa9c606aad904a2a877602ca3afaae8864905d"}
Nov 28 17:20:24 crc kubenswrapper[4710]: I1128 17:20:24.440082 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerDied","Data":"ee70ce60d58d2525220857f4e38d6811d04d073b2ad3fb0e92adb74b074b3d41"}
Nov 28 17:20:25 crc kubenswrapper[4710]: I1128 17:20:25.453575 4710 generic.go:334] "Generic (PLEG): container finished" podID="7c96c986-033f-471f-9f5c-82699b7811e7" containerID="d8f699ae2aa2a897d725d7c1ac900654c4b6c3811991ebf35902aa85557f9b51" exitCode=0
Nov 28 17:20:25 crc kubenswrapper[4710]: I1128 17:20:25.453625 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerDied","Data":"d8f699ae2aa2a897d725d7c1ac900654c4b6c3811991ebf35902aa85557f9b51"}
Nov 28 17:20:25 crc kubenswrapper[4710]: I1128 17:20:25.575584 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 28 17:20:25 crc kubenswrapper[4710]: I1128 17:20:25.576072 4710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 28 17:20:25 crc kubenswrapper[4710]: I1128 17:20:25.586856 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Nov 28 17:20:26 crc kubenswrapper[4710]: I1128 17:20:26.731530 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.124531 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-config-data\") pod \"7c96c986-033f-471f-9f5c-82699b7811e7\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.124679 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfv6g\" (UniqueName: \"kubernetes.io/projected/7c96c986-033f-471f-9f5c-82699b7811e7-kube-api-access-xfv6g\") pod \"7c96c986-033f-471f-9f5c-82699b7811e7\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.124752 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-log-httpd\") pod \"7c96c986-033f-471f-9f5c-82699b7811e7\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.124965 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-sg-core-conf-yaml\") pod \"7c96c986-033f-471f-9f5c-82699b7811e7\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.125003 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-scripts\") pod \"7c96c986-033f-471f-9f5c-82699b7811e7\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.125045 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-run-httpd\") pod \"7c96c986-033f-471f-9f5c-82699b7811e7\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.125134 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-combined-ca-bundle\") pod \"7c96c986-033f-471f-9f5c-82699b7811e7\" (UID: \"7c96c986-033f-471f-9f5c-82699b7811e7\") " Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.125675 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7c96c986-033f-471f-9f5c-82699b7811e7" (UID: "7c96c986-033f-471f-9f5c-82699b7811e7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.125857 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7c96c986-033f-471f-9f5c-82699b7811e7" (UID: "7c96c986-033f-471f-9f5c-82699b7811e7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.126294 4710 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.126325 4710 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7c96c986-033f-471f-9f5c-82699b7811e7-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.130243 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-scripts" (OuterVolumeSpecName: "scripts") pod "7c96c986-033f-471f-9f5c-82699b7811e7" (UID: "7c96c986-033f-471f-9f5c-82699b7811e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.130279 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c96c986-033f-471f-9f5c-82699b7811e7-kube-api-access-xfv6g" (OuterVolumeSpecName: "kube-api-access-xfv6g") pod "7c96c986-033f-471f-9f5c-82699b7811e7" (UID: "7c96c986-033f-471f-9f5c-82699b7811e7"). InnerVolumeSpecName "kube-api-access-xfv6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.155309 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7c96c986-033f-471f-9f5c-82699b7811e7" (UID: "7c96c986-033f-471f-9f5c-82699b7811e7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.204949 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c96c986-033f-471f-9f5c-82699b7811e7" (UID: "7c96c986-033f-471f-9f5c-82699b7811e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.228221 4710 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.228253 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.228263 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.228273 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfv6g\" (UniqueName: \"kubernetes.io/projected/7c96c986-033f-471f-9f5c-82699b7811e7-kube-api-access-xfv6g\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.232170 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-config-data" (OuterVolumeSpecName: "config-data") pod "7c96c986-033f-471f-9f5c-82699b7811e7" (UID: "7c96c986-033f-471f-9f5c-82699b7811e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.330185 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c96c986-033f-471f-9f5c-82699b7811e7-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.496406 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w7rm2" event={"ID":"f4894395-7727-4595-9a50-7a1b2b55a525","Type":"ContainerStarted","Data":"9f783ef69d5d54358a6df9c254d295ef46b1c24e4323166f8ccffa1c35419227"} Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.499226 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7c96c986-033f-471f-9f5c-82699b7811e7","Type":"ContainerDied","Data":"27188249e583d7c0345264a2e7200bde46ae05ae25089b51484dd17651d51298"} Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.499279 4710 scope.go:117] "RemoveContainer" containerID="6c8f9ceed0e51bd662fe08747b4d2a8cbef6711b3fd3e0997295635eddb38f5c" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.499299 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.517701 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-w7rm2" podStartSLOduration=1.650539687 podStartE2EDuration="11.517679768s" podCreationTimestamp="2025-11-28 17:20:18 +0000 UTC" firstStartedPulling="2025-11-28 17:20:18.947193772 +0000 UTC m=+1308.205493817" lastFinishedPulling="2025-11-28 17:20:28.814333853 +0000 UTC m=+1318.072633898" observedRunningTime="2025-11-28 17:20:29.516267464 +0000 UTC m=+1318.774567529" watchObservedRunningTime="2025-11-28 17:20:29.517679768 +0000 UTC m=+1318.775979813" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.520441 4710 scope.go:117] "RemoveContainer" containerID="4ca4b349ce3e94d57d1355cb87aa9c606aad904a2a877602ca3afaae8864905d" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.542809 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.559145 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.562280 4710 scope.go:117] "RemoveContainer" containerID="ee70ce60d58d2525220857f4e38d6811d04d073b2ad3fb0e92adb74b074b3d41" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.571959 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:29 crc kubenswrapper[4710]: E1128 17:20:29.572502 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="ceilometer-central-agent" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.572518 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="ceilometer-central-agent" Nov 28 17:20:29 crc kubenswrapper[4710]: E1128 17:20:29.572548 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="proxy-httpd" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.572555 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="proxy-httpd" Nov 28 17:20:29 crc kubenswrapper[4710]: E1128 17:20:29.572593 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="ceilometer-notification-agent" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.572600 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="ceilometer-notification-agent" Nov 28 17:20:29 crc kubenswrapper[4710]: E1128 17:20:29.572619 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="sg-core" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.572624 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="sg-core" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.572914 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="ceilometer-notification-agent" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.572962 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="sg-core" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 
17:20:29.572980 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="proxy-httpd" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.572992 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" containerName="ceilometer-central-agent" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.575200 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.578620 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.579074 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.592818 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.629248 4710 scope.go:117] "RemoveContainer" containerID="d8f699ae2aa2a897d725d7c1ac900654c4b6c3811991ebf35902aa85557f9b51" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.636590 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-scripts\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.636634 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttsn7\" (UniqueName: \"kubernetes.io/projected/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-kube-api-access-ttsn7\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.636667 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-log-httpd\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.636700 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.636807 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-config-data\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.636841 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.636889 4710 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-run-httpd\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.743193 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-scripts\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.743307 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttsn7\" (UniqueName: \"kubernetes.io/projected/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-kube-api-access-ttsn7\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.743377 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-log-httpd\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.743447 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.743604 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-config-data\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.743678 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.743824 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-run-httpd\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.744680 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-run-httpd\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.749260 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-scripts\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.750044 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-log-httpd\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.761642 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.768452 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.769938 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-config-data\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.770208 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttsn7\" (UniqueName: \"kubernetes.io/projected/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-kube-api-access-ttsn7\") pod \"ceilometer-0\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " pod="openstack/ceilometer-0" Nov 28 17:20:29 crc kubenswrapper[4710]: I1128 17:20:29.897928 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:30 crc kubenswrapper[4710]: I1128 17:20:30.347676 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:30 crc kubenswrapper[4710]: W1128 17:20:30.354641 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6eb035cc_418c_4ec5_b1b7_4acdd6e6c8a2.slice/crio-1da39883b6b8f2fe56df83a47ae0382cba4e1915943a918b8fc55c389173752d WatchSource:0}: Error finding container 1da39883b6b8f2fe56df83a47ae0382cba4e1915943a918b8fc55c389173752d: Status 404 returned error can't find the container with id 1da39883b6b8f2fe56df83a47ae0382cba4e1915943a918b8fc55c389173752d Nov 28 17:20:30 crc kubenswrapper[4710]: I1128 17:20:30.524519 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerStarted","Data":"1da39883b6b8f2fe56df83a47ae0382cba4e1915943a918b8fc55c389173752d"} Nov 28 17:20:31 crc kubenswrapper[4710]: I1128 17:20:31.177101 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c96c986-033f-471f-9f5c-82699b7811e7" path="/var/lib/kubelet/pods/7c96c986-033f-471f-9f5c-82699b7811e7/volumes" Nov 28 17:20:32 crc kubenswrapper[4710]: I1128 17:20:32.543790 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerStarted","Data":"8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8"} Nov 28 17:20:33 crc kubenswrapper[4710]: I1128 17:20:33.555645 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerStarted","Data":"f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2"} Nov 28 17:20:35 crc kubenswrapper[4710]: I1128 17:20:35.602464 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerStarted","Data":"451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987"} Nov 28 17:20:36 crc kubenswrapper[4710]: I1128 17:20:36.615818 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerStarted","Data":"e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705"} Nov 28 17:20:36 crc kubenswrapper[4710]: I1128 17:20:36.616263 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:20:36 crc kubenswrapper[4710]: I1128 17:20:36.640168 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.059586284 podStartE2EDuration="7.640146553s" podCreationTimestamp="2025-11-28 17:20:29 +0000 UTC" firstStartedPulling="2025-11-28 17:20:30.360937955 +0000 UTC m=+1319.619238000" lastFinishedPulling="2025-11-28 17:20:35.941498224 +0000 UTC m=+1325.199798269" observedRunningTime="2025-11-28 17:20:36.634190737 +0000 UTC m=+1325.892490782" watchObservedRunningTime="2025-11-28 17:20:36.640146553 +0000 UTC m=+1325.898446598" Nov 28 17:20:39 crc kubenswrapper[4710]: I1128 17:20:39.400018 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:39 crc kubenswrapper[4710]: I1128 17:20:39.401008 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="proxy-httpd" containerID="cri-o://e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705" gracePeriod=30 Nov 28 17:20:39 crc kubenswrapper[4710]: I1128 17:20:39.401092 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="sg-core" containerID="cri-o://451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987" gracePeriod=30 Nov 28 17:20:39 crc kubenswrapper[4710]: I1128 17:20:39.401252 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="ceilometer-notification-agent" containerID="cri-o://f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2" gracePeriod=30 Nov 28 17:20:39 crc kubenswrapper[4710]: I1128 17:20:39.401401 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="ceilometer-central-agent" containerID="cri-o://8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8" gracePeriod=30 Nov 28 17:20:39 crc kubenswrapper[4710]: I1128 17:20:39.649678 4710 generic.go:334] "Generic (PLEG): container finished" podID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerID="e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705" exitCode=0 Nov 28 17:20:39 crc kubenswrapper[4710]: I1128 17:20:39.649715 4710 generic.go:334] "Generic (PLEG): container finished" podID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" 
containerID="451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987" exitCode=2 Nov 28 17:20:39 crc kubenswrapper[4710]: I1128 17:20:39.649735 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerDied","Data":"e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705"} Nov 28 17:20:39 crc kubenswrapper[4710]: I1128 17:20:39.649775 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerDied","Data":"451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987"} Nov 28 17:20:40 crc kubenswrapper[4710]: I1128 17:20:40.669428 4710 generic.go:334] "Generic (PLEG): container finished" podID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerID="f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2" exitCode=0 Nov 28 17:20:40 crc kubenswrapper[4710]: I1128 17:20:40.669726 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerDied","Data":"f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2"} Nov 28 17:20:41 crc kubenswrapper[4710]: I1128 17:20:41.681670 4710 generic.go:334] "Generic (PLEG): container finished" podID="f4894395-7727-4595-9a50-7a1b2b55a525" containerID="9f783ef69d5d54358a6df9c254d295ef46b1c24e4323166f8ccffa1c35419227" exitCode=0 Nov 28 17:20:41 crc kubenswrapper[4710]: I1128 17:20:41.681765 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w7rm2" event={"ID":"f4894395-7727-4595-9a50-7a1b2b55a525","Type":"ContainerDied","Data":"9f783ef69d5d54358a6df9c254d295ef46b1c24e4323166f8ccffa1c35419227"} Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.393869 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.551244 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttsn7\" (UniqueName: \"kubernetes.io/projected/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-kube-api-access-ttsn7\") pod \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.551376 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-scripts\") pod \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.551502 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-config-data\") pod \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.551573 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-log-httpd\") pod \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.552230 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" (UID: "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.552634 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" (UID: "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.552344 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-run-httpd\") pod \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.552882 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-combined-ca-bundle\") pod \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.553231 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-sg-core-conf-yaml\") pod \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\" (UID: \"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2\") " Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.553980 4710 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.554024 4710 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.556922 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-kube-api-access-ttsn7" (OuterVolumeSpecName: "kube-api-access-ttsn7") pod "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" (UID: "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2"). InnerVolumeSpecName "kube-api-access-ttsn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.558328 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-scripts" (OuterVolumeSpecName: "scripts") pod "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" (UID: "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.591017 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" (UID: "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.638497 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" (UID: "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.656891 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.656957 4710 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.656971 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttsn7\" (UniqueName: \"kubernetes.io/projected/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-kube-api-access-ttsn7\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.656985 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.665785 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-config-data" (OuterVolumeSpecName: "config-data") pod "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" (UID: "6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.695887 4710 generic.go:334] "Generic (PLEG): container finished" podID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerID="8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8" exitCode=0 Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.695975 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.695981 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerDied","Data":"8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8"} Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.696032 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2","Type":"ContainerDied","Data":"1da39883b6b8f2fe56df83a47ae0382cba4e1915943a918b8fc55c389173752d"} Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.696055 4710 scope.go:117] "RemoveContainer" containerID="e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.728371 4710 scope.go:117] "RemoveContainer" containerID="451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.738159 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.756046 4710 scope.go:117] "RemoveContainer" containerID="f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.758152 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.758190 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.774906 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:42 crc kubenswrapper[4710]: E1128 17:20:42.775338 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="sg-core" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.775358 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="sg-core" Nov 28 17:20:42 crc kubenswrapper[4710]: E1128 17:20:42.775401 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="proxy-httpd" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.775408 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="proxy-httpd" Nov 28 17:20:42 crc kubenswrapper[4710]: E1128 17:20:42.775422 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="ceilometer-central-agent" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.775428 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="ceilometer-central-agent" Nov 28 17:20:42 crc kubenswrapper[4710]: E1128 17:20:42.775442 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="ceilometer-notification-agent" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.775449 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="ceilometer-notification-agent" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 
17:20:42.775643 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="proxy-httpd" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.775667 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="sg-core" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.775681 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="ceilometer-central-agent" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.775694 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" containerName="ceilometer-notification-agent" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.777517 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.780007 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.781024 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.789061 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.815983 4710 scope.go:117] "RemoveContainer" containerID="8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.859466 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-log-httpd\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.859634 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.859667 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-run-httpd\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.859710 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.859737 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-scripts\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.859787 4710 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-config-data\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.859832 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpsx\" (UniqueName: \"kubernetes.io/projected/04356f71-ec3f-4393-8c94-bf010eeea8ef-kube-api-access-2fpsx\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.883746 4710 scope.go:117] "RemoveContainer" containerID="e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705" Nov 28 17:20:42 crc kubenswrapper[4710]: E1128 17:20:42.884235 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705\": container with ID starting with e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705 not found: ID does not exist" containerID="e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.884262 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705"} err="failed to get container status \"e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705\": rpc error: code = NotFound desc = could not find container \"e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705\": container with ID starting with e1bd1e2108eb0152e7140fba2da971e0305f5bd2423591c710c3c7bf87f60705 not found: ID does not exist" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.884282 4710 scope.go:117] "RemoveContainer" containerID="451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987" Nov 28 17:20:42 crc kubenswrapper[4710]: E1128 17:20:42.884590 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987\": container with ID starting with 451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987 not found: ID does not exist" containerID="451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.884607 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987"} err="failed to get container status \"451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987\": rpc error: code = NotFound desc = could not find container \"451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987\": container with ID starting with 451ae5a67a6cb861959353dc9636bcf8d3ea0cc92984d84e1b36e88cd5721987 not found: ID does not exist" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.884621 4710 scope.go:117] "RemoveContainer" containerID="f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2" Nov 28 17:20:42 crc kubenswrapper[4710]: E1128 17:20:42.884891 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2\": container with ID starting with f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2 not found: ID does not exist" containerID="f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.884929 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2"} err="failed to get container status \"f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2\": rpc error: code = NotFound desc = could not find container \"f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2\": container with ID starting with f751b60dd5114c1fee287529c50c9d2d6c1fca840defe934f99ab46306237fe2 not found: ID does not exist" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.884958 4710 scope.go:117] "RemoveContainer" containerID="8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8" Nov 28 17:20:42 crc kubenswrapper[4710]: E1128 17:20:42.885315 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8\": container with ID starting with 8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8 not found: ID does not exist" containerID="8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.885337 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8"} err="failed to get container status \"8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8\": rpc error: code = NotFound desc = could not find container \"8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8\": container with ID starting with 8e688247facd4d94e48c2da26698591c0541f55d62707423470ec7912abaa3e8 not found: ID does not exist" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.961788 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.961856 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-run-httpd\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.961899 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.961926 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-scripts\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc 
kubenswrapper[4710]: I1128 17:20:42.961959 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-config-data\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.962005 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fpsx\" (UniqueName: \"kubernetes.io/projected/04356f71-ec3f-4393-8c94-bf010eeea8ef-kube-api-access-2fpsx\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.962077 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-log-httpd\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.962873 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-run-httpd\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.962905 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-log-httpd\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.967525 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.969022 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.972723 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-config-data\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.976455 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-scripts\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:42 crc kubenswrapper[4710]: I1128 17:20:42.978737 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fpsx\" (UniqueName: \"kubernetes.io/projected/04356f71-ec3f-4393-8c94-bf010eeea8ef-kube-api-access-2fpsx\") pod \"ceilometer-0\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " pod="openstack/ceilometer-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.066426 4710 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.153201 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2" path="/var/lib/kubelet/pods/6eb035cc-418c-4ec5-b1b7-4acdd6e6c8a2/volumes" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.163267 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.165042 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-config-data\") pod \"f4894395-7727-4595-9a50-7a1b2b55a525\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.165127 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-scripts\") pod \"f4894395-7727-4595-9a50-7a1b2b55a525\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.165209 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-combined-ca-bundle\") pod \"f4894395-7727-4595-9a50-7a1b2b55a525\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.165276 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgmj2\" (UniqueName: \"kubernetes.io/projected/f4894395-7727-4595-9a50-7a1b2b55a525-kube-api-access-mgmj2\") pod \"f4894395-7727-4595-9a50-7a1b2b55a525\" (UID: \"f4894395-7727-4595-9a50-7a1b2b55a525\") " Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.176992 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-scripts" (OuterVolumeSpecName: "scripts") pod "f4894395-7727-4595-9a50-7a1b2b55a525" (UID: "f4894395-7727-4595-9a50-7a1b2b55a525"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.177061 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4894395-7727-4595-9a50-7a1b2b55a525-kube-api-access-mgmj2" (OuterVolumeSpecName: "kube-api-access-mgmj2") pod "f4894395-7727-4595-9a50-7a1b2b55a525" (UID: "f4894395-7727-4595-9a50-7a1b2b55a525"). InnerVolumeSpecName "kube-api-access-mgmj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.201479 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-config-data" (OuterVolumeSpecName: "config-data") pod "f4894395-7727-4595-9a50-7a1b2b55a525" (UID: "f4894395-7727-4595-9a50-7a1b2b55a525"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.204347 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4894395-7727-4595-9a50-7a1b2b55a525" (UID: "f4894395-7727-4595-9a50-7a1b2b55a525"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.267042 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgmj2\" (UniqueName: \"kubernetes.io/projected/f4894395-7727-4595-9a50-7a1b2b55a525-kube-api-access-mgmj2\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.267069 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.267079 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.267089 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4894395-7727-4595-9a50-7a1b2b55a525-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.344002 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.344057 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.670852 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.710189 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w7rm2" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.710177 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w7rm2" event={"ID":"f4894395-7727-4595-9a50-7a1b2b55a525","Type":"ContainerDied","Data":"2344eb8bc13742268bf153c55dad3eb6902cbc397a01a71f476a24118708f65a"} Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.710321 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2344eb8bc13742268bf153c55dad3eb6902cbc397a01a71f476a24118708f65a" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.717011 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerStarted","Data":"4200e2d10556d6f78806791f1da56d5bc29a06ef04b72337d43752488dfef16d"} Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.802951 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 17:20:43 crc kubenswrapper[4710]: E1128 17:20:43.803696 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4894395-7727-4595-9a50-7a1b2b55a525" containerName="nova-cell0-conductor-db-sync" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.803854 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4894395-7727-4595-9a50-7a1b2b55a525" containerName="nova-cell0-conductor-db-sync" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.804291 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4894395-7727-4595-9a50-7a1b2b55a525" containerName="nova-cell0-conductor-db-sync" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.805353 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.813792 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.816063 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-d89mb" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.816141 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.879923 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4feeed2e-20e0-49a9-8448-2805a2f332e2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4feeed2e-20e0-49a9-8448-2805a2f332e2\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.880009 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4n9\" (UniqueName: \"kubernetes.io/projected/4feeed2e-20e0-49a9-8448-2805a2f332e2-kube-api-access-ww4n9\") pod \"nova-cell0-conductor-0\" (UID: \"4feeed2e-20e0-49a9-8448-2805a2f332e2\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.880256 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4feeed2e-20e0-49a9-8448-2805a2f332e2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4feeed2e-20e0-49a9-8448-2805a2f332e2\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.982800 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4feeed2e-20e0-49a9-8448-2805a2f332e2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"4feeed2e-20e0-49a9-8448-2805a2f332e2\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.982931 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4feeed2e-20e0-49a9-8448-2805a2f332e2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4feeed2e-20e0-49a9-8448-2805a2f332e2\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.982967 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4n9\" (UniqueName: \"kubernetes.io/projected/4feeed2e-20e0-49a9-8448-2805a2f332e2-kube-api-access-ww4n9\") pod \"nova-cell0-conductor-0\" (UID: \"4feeed2e-20e0-49a9-8448-2805a2f332e2\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.988613 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4feeed2e-20e0-49a9-8448-2805a2f332e2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"4feeed2e-20e0-49a9-8448-2805a2f332e2\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:43 crc kubenswrapper[4710]: I1128 17:20:43.998320 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4feeed2e-20e0-49a9-8448-2805a2f332e2-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"4feeed2e-20e0-49a9-8448-2805a2f332e2\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:44 crc kubenswrapper[4710]: I1128 17:20:44.017173 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4n9\" (UniqueName: \"kubernetes.io/projected/4feeed2e-20e0-49a9-8448-2805a2f332e2-kube-api-access-ww4n9\") pod \"nova-cell0-conductor-0\" (UID: \"4feeed2e-20e0-49a9-8448-2805a2f332e2\") " pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:44 crc kubenswrapper[4710]: I1128 17:20:44.162578 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:44 crc kubenswrapper[4710]: I1128 17:20:44.698447 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 28 17:20:44 crc kubenswrapper[4710]: W1128 17:20:44.706130 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4feeed2e_20e0_49a9_8448_2805a2f332e2.slice/crio-3c8822326f7965ae8e217fc59d0bf434d02c15c44e457d6f98d4bfa15811f07a WatchSource:0}: Error finding container 3c8822326f7965ae8e217fc59d0bf434d02c15c44e457d6f98d4bfa15811f07a: Status 404 returned error can't find the container with id 3c8822326f7965ae8e217fc59d0bf434d02c15c44e457d6f98d4bfa15811f07a Nov 28 17:20:44 crc kubenswrapper[4710]: I1128 17:20:44.729951 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4feeed2e-20e0-49a9-8448-2805a2f332e2","Type":"ContainerStarted","Data":"3c8822326f7965ae8e217fc59d0bf434d02c15c44e457d6f98d4bfa15811f07a"} Nov 28 17:20:44 crc kubenswrapper[4710]: I1128 17:20:44.732018 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerStarted","Data":"533973b642b1f014a940156e6c1b4aa3c4dec6fabb0e3757132c20bab98e2d60"} Nov 28 17:20:45 crc kubenswrapper[4710]: I1128 17:20:45.743428 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerStarted","Data":"2947b1c1150b7817ecbf311e4826b7ab4f151671eb5d9880dc20b6be97814900"} Nov 28 17:20:45 crc kubenswrapper[4710]: I1128 17:20:45.746316 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"4feeed2e-20e0-49a9-8448-2805a2f332e2","Type":"ContainerStarted","Data":"df04d99a8553a927ba89ef369392937c94256671e6ba3b12bd52fb3cff148d05"} Nov 28 17:20:45 crc kubenswrapper[4710]: I1128 17:20:45.747395 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:45 crc kubenswrapper[4710]: I1128 17:20:45.774995 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.774974487 podStartE2EDuration="2.774974487s" podCreationTimestamp="2025-11-28 17:20:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:45.762303713 +0000 UTC m=+1335.020603768" watchObservedRunningTime="2025-11-28 17:20:45.774974487 +0000 UTC m=+1335.033274532" Nov 28 17:20:46 crc kubenswrapper[4710]: I1128 17:20:46.764887 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerStarted","Data":"aecbc9a9c50fa805bb468b566c765f95bc1f331793e3ff45291b0088eeb3b4ab"} Nov 28 17:20:47 crc kubenswrapper[4710]: I1128 17:20:47.781171 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerStarted","Data":"b9041b7e9256b531f54278751c8f9538fec842491115d265279d2c4cddce392a"} Nov 28 17:20:47 crc kubenswrapper[4710]: I1128 17:20:47.781635 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:20:47 crc kubenswrapper[4710]: I1128 17:20:47.813812 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.043550837 podStartE2EDuration="5.813792589s" podCreationTimestamp="2025-11-28 17:20:42 +0000 UTC" firstStartedPulling="2025-11-28 17:20:43.675825409 +0000 UTC m=+1332.934125454" lastFinishedPulling="2025-11-28 17:20:47.446067161 +0000 UTC m=+1336.704367206" observedRunningTime="2025-11-28 17:20:47.806085419 +0000 UTC m=+1337.064385484" watchObservedRunningTime="2025-11-28 17:20:47.813792589 +0000 UTC m=+1337.072092624" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.199141 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.666264 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-cgmdw"] Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.667684 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.669529 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.671194 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.677666 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cgmdw"] Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.797482 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.799209 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.802929 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.824464 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-scripts\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.824566 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-config-data\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.824622 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjkgz\" (UniqueName: \"kubernetes.io/projected/5b567f65-7af2-494a-9846-77428c466361-kube-api-access-gjkgz\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.824648 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.833651 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.879143 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.881101 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.892156 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.919519 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.926169 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-config-data\") pod \"nova-scheduler-0\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " pod="openstack/nova-scheduler-0" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.927173 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-scripts\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.927334 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " pod="openstack/nova-scheduler-0" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.927513 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8km8l\" (UniqueName: \"kubernetes.io/projected/af57ffaa-1d64-474e-a0a3-06aa588351bd-kube-api-access-8km8l\") pod \"nova-scheduler-0\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " pod="openstack/nova-scheduler-0" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.927656 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-config-data\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.927870 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjkgz\" (UniqueName: \"kubernetes.io/projected/5b567f65-7af2-494a-9846-77428c466361-kube-api-access-gjkgz\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.928001 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.935818 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.936444 4710 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-scripts\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.940215 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-config-data\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:49 crc kubenswrapper[4710]: I1128 17:20:49.994074 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjkgz\" (UniqueName: \"kubernetes.io/projected/5b567f65-7af2-494a-9846-77428c466361-kube-api-access-gjkgz\") pod \"nova-cell0-cell-mapping-cgmdw\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.018227 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.020003 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.026935 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.031241 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-config-data\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.031312 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8km8l\" (UniqueName: \"kubernetes.io/projected/af57ffaa-1d64-474e-a0a3-06aa588351bd-kube-api-access-8km8l\") pod \"nova-scheduler-0\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " pod="openstack/nova-scheduler-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.031429 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-config-data\") pod \"nova-scheduler-0\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " pod="openstack/nova-scheduler-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.031459 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-logs\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.031513 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfq2p\" (UniqueName: \"kubernetes.io/projected/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-kube-api-access-xfq2p\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.031558 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.031607 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " pod="openstack/nova-scheduler-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.036452 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " pod="openstack/nova-scheduler-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.049437 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-config-data\") pod \"nova-scheduler-0\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " pod="openstack/nova-scheduler-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.067932 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.069254 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.075786 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.106573 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8km8l\" (UniqueName: \"kubernetes.io/projected/af57ffaa-1d64-474e-a0a3-06aa588351bd-kube-api-access-8km8l\") pod \"nova-scheduler-0\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " pod="openstack/nova-scheduler-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.116047 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.133557 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-logs\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.133605 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.133636 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdjkn\" (UniqueName: \"kubernetes.io/projected/e414ac49-72cb-4155-b8f5-5ff39076cfd6-kube-api-access-hdjkn\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.133661 4710 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xfq2p\" (UniqueName: \"kubernetes.io/projected/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-kube-api-access-xfq2p\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.133691 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.133717 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e414ac49-72cb-4155-b8f5-5ff39076cfd6-logs\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.133776 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-config-data\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.133820 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-config-data\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.133997 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.134665 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-logs\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.149094 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-config-data\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.149629 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.156443 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.165283 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfq2p\" (UniqueName: \"kubernetes.io/projected/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-kube-api-access-xfq2p\") pod \"nova-metadata-0\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.187489 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-px6tn"] Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.189822 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.228289 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.228901 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-px6tn"] Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291155 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291308 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-svc\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291411 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qs9h\" (UniqueName: \"kubernetes.io/projected/4686c7be-8677-4c5c-801b-dc821197c301-kube-api-access-6qs9h\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291445 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291465 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291521 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdjkn\" (UniqueName: \"kubernetes.io/projected/e414ac49-72cb-4155-b8f5-5ff39076cfd6-kube-api-access-hdjkn\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291548 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291674 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e414ac49-72cb-4155-b8f5-5ff39076cfd6-logs\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291713 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291751 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.291946 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-config\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.292058 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-config-data\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.292087 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2m5p\" (UniqueName: \"kubernetes.io/projected/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-kube-api-access-z2m5p\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.292552 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e414ac49-72cb-4155-b8f5-5ff39076cfd6-logs\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.299376 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.312162 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-config-data\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.326555 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdjkn\" (UniqueName: \"kubernetes.io/projected/e414ac49-72cb-4155-b8f5-5ff39076cfd6-kube-api-access-hdjkn\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.331720 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.409317 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.424555 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.424694 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.424733 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.425649 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-config\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.426564 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.427093 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.424896 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-config\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.427400 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2m5p\" (UniqueName: \"kubernetes.io/projected/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-kube-api-access-z2m5p\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.427518 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.427631 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-svc\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.427711 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qs9h\" (UniqueName: \"kubernetes.io/projected/4686c7be-8677-4c5c-801b-dc821197c301-kube-api-access-6qs9h\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.427739 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.440028 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-svc\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.441093 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.461900 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.462722 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qs9h\" (UniqueName: \"kubernetes.io/projected/4686c7be-8677-4c5c-801b-dc821197c301-kube-api-access-6qs9h\") pod \"dnsmasq-dns-bccf8f775-px6tn\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.463180 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2m5p\" (UniqueName: \"kubernetes.io/projected/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-kube-api-access-z2m5p\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.587360 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.614989 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.637613 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:50 crc kubenswrapper[4710]: W1128 17:20:50.976873 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf57ffaa_1d64_474e_a0a3_06aa588351bd.slice/crio-b867ea610d8579708606b208fff83ab29b07119f7ae8cb9eabb8465a6d38a396 WatchSource:0}: Error finding container b867ea610d8579708606b208fff83ab29b07119f7ae8cb9eabb8465a6d38a396: Status 404 returned error can't find the container with id b867ea610d8579708606b208fff83ab29b07119f7ae8cb9eabb8465a6d38a396 Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.980172 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:20:50 crc kubenswrapper[4710]: I1128 17:20:50.991184 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.235981 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cgmdw"] Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.324614 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-px6tn"] Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.383319 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.406970 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.425218 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wmppk"] Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.426738 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.445328 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wmppk"] Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.445623 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.445814 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.466114 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.466199 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-scripts\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.466242 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-config-data\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.466328 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8tzs\" (UniqueName: \"kubernetes.io/projected/a74731a6-5583-442c-bbe9-67f586a1c383-kube-api-access-g8tzs\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.568143 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-config-data\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.568322 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8tzs\" (UniqueName: \"kubernetes.io/projected/a74731a6-5583-442c-bbe9-67f586a1c383-kube-api-access-g8tzs\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.568899 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.569426 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-scripts\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.573868 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.578375 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-scripts\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.586333 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8tzs\" (UniqueName: \"kubernetes.io/projected/a74731a6-5583-442c-bbe9-67f586a1c383-kube-api-access-g8tzs\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.586638 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-config-data\") pod \"nova-cell1-conductor-db-sync-wmppk\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.761381 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.847751 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"af57ffaa-1d64-474e-a0a3-06aa588351bd","Type":"ContainerStarted","Data":"b867ea610d8579708606b208fff83ab29b07119f7ae8cb9eabb8465a6d38a396"} Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.851169 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cgmdw" event={"ID":"5b567f65-7af2-494a-9846-77428c466361","Type":"ContainerStarted","Data":"bb2bcc47fbe2944b419896f2e544cbc104917f8407f96d430196b05c6f5e98b1"} Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.851217 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cgmdw" event={"ID":"5b567f65-7af2-494a-9846-77428c466361","Type":"ContainerStarted","Data":"503ef7749b1f7ad4de4a9a8cb9e5f27d217406fcdeed0ade9c23ecc791f8f227"} Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.855164 4710 generic.go:334] "Generic (PLEG): container finished" podID="4686c7be-8677-4c5c-801b-dc821197c301" containerID="aaf1c3c96a766c41722fee7af5651119eccf5b11f0063b7499a39d114e8b657c" exitCode=0 Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.855258 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" event={"ID":"4686c7be-8677-4c5c-801b-dc821197c301","Type":"ContainerDied","Data":"aaf1c3c96a766c41722fee7af5651119eccf5b11f0063b7499a39d114e8b657c"} Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.855293 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" event={"ID":"4686c7be-8677-4c5c-801b-dc821197c301","Type":"ContainerStarted","Data":"bdacdd2e1e9210f8ea5c5c386ed7029ae529e34d08798365e88cf291dd3708e2"} Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.857901 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8bc5a36-aa8d-4739-ae2f-63811f1308e1","Type":"ContainerStarted","Data":"4964d347ee7cbc128ca756acd0386db1b93546620e1fbb0b864dd2186ee98b5c"} Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.859251 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0","Type":"ContainerStarted","Data":"9102855add439f0ca76bb5a2cb536b55b34570c11dc55994d0fd3ceecb24500a"} Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.870474 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e414ac49-72cb-4155-b8f5-5ff39076cfd6","Type":"ContainerStarted","Data":"773149e1127709efad38be863ddecd3d3e8b31256995e2d1302d2c9dc651f792"} Nov 28 17:20:51 crc kubenswrapper[4710]: I1128 17:20:51.873468 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-cgmdw" podStartSLOduration=2.873451074 podStartE2EDuration="2.873451074s" podCreationTimestamp="2025-11-28 17:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:51.870888164 +0000 UTC m=+1341.129188219" watchObservedRunningTime="2025-11-28 17:20:51.873451074 +0000 UTC m=+1341.131751119" Nov 28 17:20:52 crc kubenswrapper[4710]: W1128 17:20:52.339553 4710 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda74731a6_5583_442c_bbe9_67f586a1c383.slice/crio-605a5087f9d7fbd9bfd31e0dfb42333a327abf30a1793c8d6c336e492738efbb WatchSource:0}: Error finding container 605a5087f9d7fbd9bfd31e0dfb42333a327abf30a1793c8d6c336e492738efbb: Status 404 returned error can't find the container with id 605a5087f9d7fbd9bfd31e0dfb42333a327abf30a1793c8d6c336e492738efbb Nov 28 17:20:52 crc kubenswrapper[4710]: I1128 17:20:52.349644 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wmppk"] Nov 28 17:20:52 crc kubenswrapper[4710]: I1128 17:20:52.881613 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wmppk" event={"ID":"a74731a6-5583-442c-bbe9-67f586a1c383","Type":"ContainerStarted","Data":"605a5087f9d7fbd9bfd31e0dfb42333a327abf30a1793c8d6c336e492738efbb"} Nov 28 17:20:52 crc kubenswrapper[4710]: I1128 17:20:52.887914 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" event={"ID":"4686c7be-8677-4c5c-801b-dc821197c301","Type":"ContainerStarted","Data":"0a47fc515666d02a6bf6d00177a8219f859bcd57365186f5c18a671ee974b152"} Nov 28 17:20:52 crc kubenswrapper[4710]: I1128 17:20:52.887998 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:20:52 crc kubenswrapper[4710]: I1128 17:20:52.917625 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" podStartSLOduration=2.9176035689999997 podStartE2EDuration="2.917603569s" podCreationTimestamp="2025-11-28 17:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:52.911155888 +0000 UTC m=+1342.169455923" watchObservedRunningTime="2025-11-28 17:20:52.917603569 +0000 UTC m=+1342.175903614" Nov 28 17:20:53 crc kubenswrapper[4710]: I1128 17:20:53.374827 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:20:53 crc kubenswrapper[4710]: I1128 17:20:53.394825 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:20:53 crc kubenswrapper[4710]: I1128 17:20:53.900088 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wmppk" event={"ID":"a74731a6-5583-442c-bbe9-67f586a1c383","Type":"ContainerStarted","Data":"faa5fc465964f4b1fc77376cec30beb90d93458adb12ba94b524522dfcfd97d1"} Nov 28 17:20:53 crc kubenswrapper[4710]: I1128 17:20:53.931716 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wmppk" podStartSLOduration=2.93169464 podStartE2EDuration="2.93169464s" podCreationTimestamp="2025-11-28 17:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:53.920041007 +0000 UTC m=+1343.178341062" watchObservedRunningTime="2025-11-28 17:20:53.93169464 +0000 UTC m=+1343.189994695" Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.930350 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e414ac49-72cb-4155-b8f5-5ff39076cfd6","Type":"ContainerStarted","Data":"adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f"} Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.930686 4710 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e414ac49-72cb-4155-b8f5-5ff39076cfd6","Type":"ContainerStarted","Data":"2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7"} Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.933546 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"af57ffaa-1d64-474e-a0a3-06aa588351bd","Type":"ContainerStarted","Data":"d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251"} Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.935741 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8bc5a36-aa8d-4739-ae2f-63811f1308e1","Type":"ContainerStarted","Data":"086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa"} Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.935794 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8bc5a36-aa8d-4739-ae2f-63811f1308e1","Type":"ContainerStarted","Data":"a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120"} Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.935917 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerName="nova-metadata-log" containerID="cri-o://a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120" gracePeriod=30 Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.936000 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerName="nova-metadata-metadata" containerID="cri-o://086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa" gracePeriod=30 Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.940511 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0","Type":"ContainerStarted","Data":"5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe"} Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.940672 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe" gracePeriod=30 Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.957356 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.567878402 podStartE2EDuration="6.957337992s" podCreationTimestamp="2025-11-28 17:20:49 +0000 UTC" firstStartedPulling="2025-11-28 17:20:51.381447591 +0000 UTC m=+1340.639747636" lastFinishedPulling="2025-11-28 17:20:54.770907181 +0000 UTC m=+1344.029207226" observedRunningTime="2025-11-28 17:20:55.949211109 +0000 UTC m=+1345.207511154" watchObservedRunningTime="2025-11-28 17:20:55.957337992 +0000 UTC m=+1345.215638037" Nov 28 17:20:55 crc kubenswrapper[4710]: I1128 17:20:55.982728 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.262324799 podStartE2EDuration="6.982708651s" podCreationTimestamp="2025-11-28 17:20:49 +0000 UTC" firstStartedPulling="2025-11-28 17:20:50.98272111 +0000 UTC m=+1340.241021155" lastFinishedPulling="2025-11-28 
17:20:54.703104962 +0000 UTC m=+1343.961405007" observedRunningTime="2025-11-28 17:20:55.972139172 +0000 UTC m=+1345.230439237" watchObservedRunningTime="2025-11-28 17:20:55.982708651 +0000 UTC m=+1345.241008696" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.016863 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.294457998 podStartE2EDuration="7.016841413s" podCreationTimestamp="2025-11-28 17:20:49 +0000 UTC" firstStartedPulling="2025-11-28 17:20:50.982193763 +0000 UTC m=+1340.240493808" lastFinishedPulling="2025-11-28 17:20:54.704577178 +0000 UTC m=+1343.962877223" observedRunningTime="2025-11-28 17:20:56.00355473 +0000 UTC m=+1345.261854775" watchObservedRunningTime="2025-11-28 17:20:56.016841413 +0000 UTC m=+1345.275141458" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.039301 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.706329738 podStartE2EDuration="7.039278641s" podCreationTimestamp="2025-11-28 17:20:49 +0000 UTC" firstStartedPulling="2025-11-28 17:20:51.371572284 +0000 UTC m=+1340.629872329" lastFinishedPulling="2025-11-28 17:20:54.704521187 +0000 UTC m=+1343.962821232" observedRunningTime="2025-11-28 17:20:56.021178087 +0000 UTC m=+1345.279478132" watchObservedRunningTime="2025-11-28 17:20:56.039278641 +0000 UTC m=+1345.297578696" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.594099 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.712853 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-logs\") pod \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.713028 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-combined-ca-bundle\") pod \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.713194 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-config-data\") pod \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.713288 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfq2p\" (UniqueName: \"kubernetes.io/projected/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-kube-api-access-xfq2p\") pod \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\" (UID: \"d8bc5a36-aa8d-4739-ae2f-63811f1308e1\") " Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.714202 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-logs" (OuterVolumeSpecName: "logs") pod "d8bc5a36-aa8d-4739-ae2f-63811f1308e1" (UID: "d8bc5a36-aa8d-4739-ae2f-63811f1308e1"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.738047 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-kube-api-access-xfq2p" (OuterVolumeSpecName: "kube-api-access-xfq2p") pod "d8bc5a36-aa8d-4739-ae2f-63811f1308e1" (UID: "d8bc5a36-aa8d-4739-ae2f-63811f1308e1"). InnerVolumeSpecName "kube-api-access-xfq2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.750655 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8bc5a36-aa8d-4739-ae2f-63811f1308e1" (UID: "d8bc5a36-aa8d-4739-ae2f-63811f1308e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.753188 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-config-data" (OuterVolumeSpecName: "config-data") pod "d8bc5a36-aa8d-4739-ae2f-63811f1308e1" (UID: "d8bc5a36-aa8d-4739-ae2f-63811f1308e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.816099 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.816350 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.816447 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.816530 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfq2p\" (UniqueName: \"kubernetes.io/projected/d8bc5a36-aa8d-4739-ae2f-63811f1308e1-kube-api-access-xfq2p\") on node \"crc\" DevicePath \"\"" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.970318 4710 generic.go:334] "Generic (PLEG): container finished" podID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerID="086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa" exitCode=0 Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.970355 4710 generic.go:334] "Generic (PLEG): container finished" podID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerID="a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120" exitCode=143 Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.970676 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.971499 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8bc5a36-aa8d-4739-ae2f-63811f1308e1","Type":"ContainerDied","Data":"086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa"} Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.971529 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8bc5a36-aa8d-4739-ae2f-63811f1308e1","Type":"ContainerDied","Data":"a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120"} Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.971539 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d8bc5a36-aa8d-4739-ae2f-63811f1308e1","Type":"ContainerDied","Data":"4964d347ee7cbc128ca756acd0386db1b93546620e1fbb0b864dd2186ee98b5c"} Nov 28 17:20:56 crc kubenswrapper[4710]: I1128 17:20:56.971555 4710 scope.go:117] "RemoveContainer" containerID="086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.022615 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.023047 4710 scope.go:117] "RemoveContainer" containerID="a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.041389 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.053176 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:20:57 crc kubenswrapper[4710]: E1128 17:20:57.053824 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerName="nova-metadata-log" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.053850 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerName="nova-metadata-log" Nov 28 17:20:57 crc kubenswrapper[4710]: E1128 17:20:57.053888 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerName="nova-metadata-metadata" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.053897 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerName="nova-metadata-metadata" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.054152 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerName="nova-metadata-metadata" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.054184 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" containerName="nova-metadata-log" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.055738 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.058495 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.058698 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.067358 4710 scope.go:117] "RemoveContainer" containerID="086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa" Nov 28 17:20:57 crc kubenswrapper[4710]: E1128 17:20:57.067844 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa\": container with ID starting with 086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa not found: ID does not exist" containerID="086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.067883 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa"} err="failed to get container status \"086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa\": rpc error: code = NotFound desc = could not find container \"086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa\": container with ID starting with 086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa not found: ID does not exist" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.067906 4710 scope.go:117] "RemoveContainer" containerID="a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.068001 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:20:57 crc kubenswrapper[4710]: E1128 17:20:57.068299 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120\": container with ID starting with a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120 not found: ID does not exist" containerID="a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.068357 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120"} err="failed to get container status \"a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120\": rpc error: code = NotFound desc = could not find container \"a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120\": container with ID starting with a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120 not found: ID does not exist" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.068390 4710 scope.go:117] "RemoveContainer" containerID="086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.068876 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa"} err="failed to get container status \"086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa\": rpc error: 
code = NotFound desc = could not find container \"086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa\": container with ID starting with 086ee55ab2305d7aec3330b65a80e6276e695135195b93777d1efb44004663fa not found: ID does not exist" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.068898 4710 scope.go:117] "RemoveContainer" containerID="a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.069191 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120"} err="failed to get container status \"a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120\": rpc error: code = NotFound desc = could not find container \"a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120\": container with ID starting with a5810186469a6a183c2d75de722f0dd83160779dbe4aa9b5543ab96aa8bbb120 not found: ID does not exist" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.122495 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.122735 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.122963 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lv95\" (UniqueName: \"kubernetes.io/projected/c83577db-b63f-4908-97a5-48f32d09d157-kube-api-access-9lv95\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.123015 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83577db-b63f-4908-97a5-48f32d09d157-logs\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.123290 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-config-data\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.158804 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8bc5a36-aa8d-4739-ae2f-63811f1308e1" path="/var/lib/kubelet/pods/d8bc5a36-aa8d-4739-ae2f-63811f1308e1/volumes" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.225240 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-config-data\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 
crc kubenswrapper[4710]: I1128 17:20:57.225367 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.225409 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.225594 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lv95\" (UniqueName: \"kubernetes.io/projected/c83577db-b63f-4908-97a5-48f32d09d157-kube-api-access-9lv95\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.225632 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83577db-b63f-4908-97a5-48f32d09d157-logs\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.226050 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83577db-b63f-4908-97a5-48f32d09d157-logs\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.232165 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.232274 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.232386 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-config-data\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.244099 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lv95\" (UniqueName: \"kubernetes.io/projected/c83577db-b63f-4908-97a5-48f32d09d157-kube-api-access-9lv95\") pod \"nova-metadata-0\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.386406 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.856497 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:20:57 crc kubenswrapper[4710]: W1128 17:20:57.867083 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc83577db_b63f_4908_97a5_48f32d09d157.slice/crio-9963f4a98959d5b665bf72834580fd55355d825b64addebb5a70f70bd6b3c180 WatchSource:0}: Error finding container 9963f4a98959d5b665bf72834580fd55355d825b64addebb5a70f70bd6b3c180: Status 404 returned error can't find the container with id 9963f4a98959d5b665bf72834580fd55355d825b64addebb5a70f70bd6b3c180 Nov 28 17:20:57 crc kubenswrapper[4710]: I1128 17:20:57.985436 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c83577db-b63f-4908-97a5-48f32d09d157","Type":"ContainerStarted","Data":"9963f4a98959d5b665bf72834580fd55355d825b64addebb5a70f70bd6b3c180"} Nov 28 17:20:58 crc kubenswrapper[4710]: I1128 17:20:58.999360 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c83577db-b63f-4908-97a5-48f32d09d157","Type":"ContainerStarted","Data":"26f66250aaf44adf5f5f5fe311e2098a4072a429de3f1d7d0ea1f47ebf8cbd6f"} Nov 28 17:20:58 crc kubenswrapper[4710]: I1128 17:20:58.999676 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c83577db-b63f-4908-97a5-48f32d09d157","Type":"ContainerStarted","Data":"38d2fed9cfcae99efbff6204056b0b11b6d9c43ea0296dd53966a93fb4fdac75"} Nov 28 17:20:59 crc kubenswrapper[4710]: I1128 17:20:59.002404 4710 generic.go:334] "Generic (PLEG): container finished" podID="5b567f65-7af2-494a-9846-77428c466361" containerID="bb2bcc47fbe2944b419896f2e544cbc104917f8407f96d430196b05c6f5e98b1" exitCode=0 Nov 28 17:20:59 crc kubenswrapper[4710]: I1128 17:20:59.002605 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cgmdw" event={"ID":"5b567f65-7af2-494a-9846-77428c466361","Type":"ContainerDied","Data":"bb2bcc47fbe2944b419896f2e544cbc104917f8407f96d430196b05c6f5e98b1"} Nov 28 17:20:59 crc kubenswrapper[4710]: I1128 17:20:59.023904 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.023879369 podStartE2EDuration="2.023879369s" podCreationTimestamp="2025-11-28 17:20:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:20:59.023448425 +0000 UTC m=+1348.281748520" watchObservedRunningTime="2025-11-28 17:20:59.023879369 +0000 UTC m=+1348.282179414" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.016280 4710 generic.go:334] "Generic (PLEG): container finished" podID="a74731a6-5583-442c-bbe9-67f586a1c383" containerID="faa5fc465964f4b1fc77376cec30beb90d93458adb12ba94b524522dfcfd97d1" exitCode=0 Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.016378 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wmppk" event={"ID":"a74731a6-5583-442c-bbe9-67f586a1c383","Type":"ContainerDied","Data":"faa5fc465964f4b1fc77376cec30beb90d93458adb12ba94b524522dfcfd97d1"} Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.136816 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 17:21:00 crc 
kubenswrapper[4710]: I1128 17:21:00.136945 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.175142 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.483633 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.506694 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-config-data\") pod \"5b567f65-7af2-494a-9846-77428c466361\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.506826 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjkgz\" (UniqueName: \"kubernetes.io/projected/5b567f65-7af2-494a-9846-77428c466361-kube-api-access-gjkgz\") pod \"5b567f65-7af2-494a-9846-77428c466361\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.506924 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-combined-ca-bundle\") pod \"5b567f65-7af2-494a-9846-77428c466361\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.507085 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-scripts\") pod \"5b567f65-7af2-494a-9846-77428c466361\" (UID: \"5b567f65-7af2-494a-9846-77428c466361\") " Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.518252 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-scripts" (OuterVolumeSpecName: "scripts") pod "5b567f65-7af2-494a-9846-77428c466361" (UID: "5b567f65-7af2-494a-9846-77428c466361"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.535925 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b567f65-7af2-494a-9846-77428c466361-kube-api-access-gjkgz" (OuterVolumeSpecName: "kube-api-access-gjkgz") pod "5b567f65-7af2-494a-9846-77428c466361" (UID: "5b567f65-7af2-494a-9846-77428c466361"). InnerVolumeSpecName "kube-api-access-gjkgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.540179 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-config-data" (OuterVolumeSpecName: "config-data") pod "5b567f65-7af2-494a-9846-77428c466361" (UID: "5b567f65-7af2-494a-9846-77428c466361"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.544965 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5b567f65-7af2-494a-9846-77428c466361" (UID: "5b567f65-7af2-494a-9846-77428c466361"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.588065 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.588118 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.609983 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjkgz\" (UniqueName: \"kubernetes.io/projected/5b567f65-7af2-494a-9846-77428c466361-kube-api-access-gjkgz\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.610194 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.610260 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.610315 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b567f65-7af2-494a-9846-77428c466361-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.616005 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.639733 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.710916 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-kjnkn"] Nov 28 17:21:00 crc kubenswrapper[4710]: I1128 17:21:00.711166 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" podUID="85ac3d96-65a4-4549-a26e-a12e06ae39af" containerName="dnsmasq-dns" containerID="cri-o://e42555f5aa7f0e6dbfef4d03457e5ee72007d120c3f5f0f3c55859cf2844df33" gracePeriod=10 Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.029450 4710 generic.go:334] "Generic (PLEG): container finished" podID="85ac3d96-65a4-4549-a26e-a12e06ae39af" containerID="e42555f5aa7f0e6dbfef4d03457e5ee72007d120c3f5f0f3c55859cf2844df33" exitCode=0 Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.029482 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" event={"ID":"85ac3d96-65a4-4549-a26e-a12e06ae39af","Type":"ContainerDied","Data":"e42555f5aa7f0e6dbfef4d03457e5ee72007d120c3f5f0f3c55859cf2844df33"} Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.031171 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cgmdw" 
event={"ID":"5b567f65-7af2-494a-9846-77428c466361","Type":"ContainerDied","Data":"503ef7749b1f7ad4de4a9a8cb9e5f27d217406fcdeed0ade9c23ecc791f8f227"} Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.031201 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="503ef7749b1f7ad4de4a9a8cb9e5f27d217406fcdeed0ade9c23ecc791f8f227" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.031299 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cgmdw" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.098736 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.202493 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.226654 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t8qc\" (UniqueName: \"kubernetes.io/projected/85ac3d96-65a4-4549-a26e-a12e06ae39af-kube-api-access-9t8qc\") pod \"85ac3d96-65a4-4549-a26e-a12e06ae39af\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.226713 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-nb\") pod \"85ac3d96-65a4-4549-a26e-a12e06ae39af\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.226734 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-svc\") pod \"85ac3d96-65a4-4549-a26e-a12e06ae39af\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.226829 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-swift-storage-0\") pod \"85ac3d96-65a4-4549-a26e-a12e06ae39af\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.226907 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-sb\") pod \"85ac3d96-65a4-4549-a26e-a12e06ae39af\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.226986 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-config\") pod \"85ac3d96-65a4-4549-a26e-a12e06ae39af\" (UID: \"85ac3d96-65a4-4549-a26e-a12e06ae39af\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.235555 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.235796 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-log" containerID="cri-o://2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7" gracePeriod=30 Nov 28 17:21:01 crc 
kubenswrapper[4710]: I1128 17:21:01.236069 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-api" containerID="cri-o://adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f" gracePeriod=30 Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.250705 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": EOF" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.250903 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.211:8774/\": EOF" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.257717 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ac3d96-65a4-4549-a26e-a12e06ae39af-kube-api-access-9t8qc" (OuterVolumeSpecName: "kube-api-access-9t8qc") pod "85ac3d96-65a4-4549-a26e-a12e06ae39af" (UID: "85ac3d96-65a4-4549-a26e-a12e06ae39af"). InnerVolumeSpecName "kube-api-access-9t8qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.300312 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.300568 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c83577db-b63f-4908-97a5-48f32d09d157" containerName="nova-metadata-log" containerID="cri-o://38d2fed9cfcae99efbff6204056b0b11b6d9c43ea0296dd53966a93fb4fdac75" gracePeriod=30 Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.300622 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c83577db-b63f-4908-97a5-48f32d09d157" containerName="nova-metadata-metadata" containerID="cri-o://26f66250aaf44adf5f5f5fe311e2098a4072a429de3f1d7d0ea1f47ebf8cbd6f" gracePeriod=30 Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.341522 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t8qc\" (UniqueName: \"kubernetes.io/projected/85ac3d96-65a4-4549-a26e-a12e06ae39af-kube-api-access-9t8qc\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.411176 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-config" (OuterVolumeSpecName: "config") pod "85ac3d96-65a4-4549-a26e-a12e06ae39af" (UID: "85ac3d96-65a4-4549-a26e-a12e06ae39af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.421316 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "85ac3d96-65a4-4549-a26e-a12e06ae39af" (UID: "85ac3d96-65a4-4549-a26e-a12e06ae39af"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.428720 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "85ac3d96-65a4-4549-a26e-a12e06ae39af" (UID: "85ac3d96-65a4-4549-a26e-a12e06ae39af"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.436163 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "85ac3d96-65a4-4549-a26e-a12e06ae39af" (UID: "85ac3d96-65a4-4549-a26e-a12e06ae39af"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.435927 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "85ac3d96-65a4-4549-a26e-a12e06ae39af" (UID: "85ac3d96-65a4-4549-a26e-a12e06ae39af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.443304 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.443339 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.443350 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.443361 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.443369 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ac3d96-65a4-4549-a26e-a12e06ae39af-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.669294 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.680823 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.852382 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-combined-ca-bundle\") pod \"a74731a6-5583-442c-bbe9-67f586a1c383\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.852706 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-scripts\") pod \"a74731a6-5583-442c-bbe9-67f586a1c383\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.852747 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-config-data\") pod \"a74731a6-5583-442c-bbe9-67f586a1c383\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.852870 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8tzs\" (UniqueName: \"kubernetes.io/projected/a74731a6-5583-442c-bbe9-67f586a1c383-kube-api-access-g8tzs\") pod \"a74731a6-5583-442c-bbe9-67f586a1c383\" (UID: \"a74731a6-5583-442c-bbe9-67f586a1c383\") " Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.857268 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-scripts" (OuterVolumeSpecName: "scripts") pod "a74731a6-5583-442c-bbe9-67f586a1c383" (UID: "a74731a6-5583-442c-bbe9-67f586a1c383"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.859976 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a74731a6-5583-442c-bbe9-67f586a1c383-kube-api-access-g8tzs" (OuterVolumeSpecName: "kube-api-access-g8tzs") pod "a74731a6-5583-442c-bbe9-67f586a1c383" (UID: "a74731a6-5583-442c-bbe9-67f586a1c383"). InnerVolumeSpecName "kube-api-access-g8tzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.889278 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a74731a6-5583-442c-bbe9-67f586a1c383" (UID: "a74731a6-5583-442c-bbe9-67f586a1c383"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.894639 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-config-data" (OuterVolumeSpecName: "config-data") pod "a74731a6-5583-442c-bbe9-67f586a1c383" (UID: "a74731a6-5583-442c-bbe9-67f586a1c383"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.954728 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.955049 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.955062 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8tzs\" (UniqueName: \"kubernetes.io/projected/a74731a6-5583-442c-bbe9-67f586a1c383-kube-api-access-g8tzs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:01 crc kubenswrapper[4710]: I1128 17:21:01.955071 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a74731a6-5583-442c-bbe9-67f586a1c383-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.067240 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wmppk" event={"ID":"a74731a6-5583-442c-bbe9-67f586a1c383","Type":"ContainerDied","Data":"605a5087f9d7fbd9bfd31e0dfb42333a327abf30a1793c8d6c336e492738efbb"} Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.067292 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="605a5087f9d7fbd9bfd31e0dfb42333a327abf30a1793c8d6c336e492738efbb" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.067363 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wmppk" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.102527 4710 generic.go:334] "Generic (PLEG): container finished" podID="c83577db-b63f-4908-97a5-48f32d09d157" containerID="26f66250aaf44adf5f5f5fe311e2098a4072a429de3f1d7d0ea1f47ebf8cbd6f" exitCode=0 Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.102677 4710 generic.go:334] "Generic (PLEG): container finished" podID="c83577db-b63f-4908-97a5-48f32d09d157" containerID="38d2fed9cfcae99efbff6204056b0b11b6d9c43ea0296dd53966a93fb4fdac75" exitCode=143 Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.102791 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c83577db-b63f-4908-97a5-48f32d09d157","Type":"ContainerDied","Data":"26f66250aaf44adf5f5f5fe311e2098a4072a429de3f1d7d0ea1f47ebf8cbd6f"} Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.102887 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c83577db-b63f-4908-97a5-48f32d09d157","Type":"ContainerDied","Data":"38d2fed9cfcae99efbff6204056b0b11b6d9c43ea0296dd53966a93fb4fdac75"} Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.129097 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" event={"ID":"85ac3d96-65a4-4549-a26e-a12e06ae39af","Type":"ContainerDied","Data":"142b3570e95dc8cd7200d391953d2f687a2acb8e2d70dc24ae7bf8693e6033e8"} Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.129162 4710 scope.go:117] "RemoveContainer" containerID="e42555f5aa7f0e6dbfef4d03457e5ee72007d120c3f5f0f3c55859cf2844df33" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.129423 4710 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-kjnkn" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.172536 4710 generic.go:334] "Generic (PLEG): container finished" podID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerID="2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7" exitCode=143 Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.172859 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e414ac49-72cb-4155-b8f5-5ff39076cfd6","Type":"ContainerDied","Data":"2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7"} Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.207744 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-kjnkn"] Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.232797 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-kjnkn"] Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.248145 4710 scope.go:117] "RemoveContainer" containerID="ab3cc35d6d6f50efbe7e1c67bf1d9aff0fcc0a5a49dc88e93a6e8f1244227a6e" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.255418 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 17:21:02 crc kubenswrapper[4710]: E1128 17:21:02.255989 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b567f65-7af2-494a-9846-77428c466361" containerName="nova-manage" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.256013 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b567f65-7af2-494a-9846-77428c466361" containerName="nova-manage" Nov 28 17:21:02 crc kubenswrapper[4710]: E1128 17:21:02.256030 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ac3d96-65a4-4549-a26e-a12e06ae39af" containerName="init" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.256040 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ac3d96-65a4-4549-a26e-a12e06ae39af" containerName="init" Nov 28 17:21:02 crc kubenswrapper[4710]: E1128 17:21:02.256075 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ac3d96-65a4-4549-a26e-a12e06ae39af" containerName="dnsmasq-dns" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.256084 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ac3d96-65a4-4549-a26e-a12e06ae39af" containerName="dnsmasq-dns" Nov 28 17:21:02 crc kubenswrapper[4710]: E1128 17:21:02.256124 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a74731a6-5583-442c-bbe9-67f586a1c383" containerName="nova-cell1-conductor-db-sync" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.256133 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="a74731a6-5583-442c-bbe9-67f586a1c383" containerName="nova-cell1-conductor-db-sync" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.256409 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ac3d96-65a4-4549-a26e-a12e06ae39af" containerName="dnsmasq-dns" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.256434 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="a74731a6-5583-442c-bbe9-67f586a1c383" containerName="nova-cell1-conductor-db-sync" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.256468 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b567f65-7af2-494a-9846-77428c466361" containerName="nova-manage" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.257490 
4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.267386 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.268188 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.366451 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b01ab0-53fd-4ada-897c-3a84952a9fb9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"64b01ab0-53fd-4ada-897c-3a84952a9fb9\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.366496 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b01ab0-53fd-4ada-897c-3a84952a9fb9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"64b01ab0-53fd-4ada-897c-3a84952a9fb9\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.366771 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4g4q\" (UniqueName: \"kubernetes.io/projected/64b01ab0-53fd-4ada-897c-3a84952a9fb9-kube-api-access-d4g4q\") pod \"nova-cell1-conductor-0\" (UID: \"64b01ab0-53fd-4ada-897c-3a84952a9fb9\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.387084 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.387388 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.469000 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b01ab0-53fd-4ada-897c-3a84952a9fb9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"64b01ab0-53fd-4ada-897c-3a84952a9fb9\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.469055 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b01ab0-53fd-4ada-897c-3a84952a9fb9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"64b01ab0-53fd-4ada-897c-3a84952a9fb9\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.469120 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4g4q\" (UniqueName: \"kubernetes.io/projected/64b01ab0-53fd-4ada-897c-3a84952a9fb9-kube-api-access-d4g4q\") pod \"nova-cell1-conductor-0\" (UID: \"64b01ab0-53fd-4ada-897c-3a84952a9fb9\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.473359 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64b01ab0-53fd-4ada-897c-3a84952a9fb9-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"64b01ab0-53fd-4ada-897c-3a84952a9fb9\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.482897 
4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64b01ab0-53fd-4ada-897c-3a84952a9fb9-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"64b01ab0-53fd-4ada-897c-3a84952a9fb9\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.492449 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4g4q\" (UniqueName: \"kubernetes.io/projected/64b01ab0-53fd-4ada-897c-3a84952a9fb9-kube-api-access-d4g4q\") pod \"nova-cell1-conductor-0\" (UID: \"64b01ab0-53fd-4ada-897c-3a84952a9fb9\") " pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.609517 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.624384 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.674812 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-config-data\") pod \"c83577db-b63f-4908-97a5-48f32d09d157\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.674915 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-nova-metadata-tls-certs\") pod \"c83577db-b63f-4908-97a5-48f32d09d157\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.674952 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lv95\" (UniqueName: \"kubernetes.io/projected/c83577db-b63f-4908-97a5-48f32d09d157-kube-api-access-9lv95\") pod \"c83577db-b63f-4908-97a5-48f32d09d157\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.674980 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-combined-ca-bundle\") pod \"c83577db-b63f-4908-97a5-48f32d09d157\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.675015 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83577db-b63f-4908-97a5-48f32d09d157-logs\") pod \"c83577db-b63f-4908-97a5-48f32d09d157\" (UID: \"c83577db-b63f-4908-97a5-48f32d09d157\") " Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.676621 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c83577db-b63f-4908-97a5-48f32d09d157-logs" (OuterVolumeSpecName: "logs") pod "c83577db-b63f-4908-97a5-48f32d09d157" (UID: "c83577db-b63f-4908-97a5-48f32d09d157"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.683206 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c83577db-b63f-4908-97a5-48f32d09d157-kube-api-access-9lv95" (OuterVolumeSpecName: "kube-api-access-9lv95") pod "c83577db-b63f-4908-97a5-48f32d09d157" (UID: "c83577db-b63f-4908-97a5-48f32d09d157"). InnerVolumeSpecName "kube-api-access-9lv95". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.715287 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-config-data" (OuterVolumeSpecName: "config-data") pod "c83577db-b63f-4908-97a5-48f32d09d157" (UID: "c83577db-b63f-4908-97a5-48f32d09d157"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.730214 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c83577db-b63f-4908-97a5-48f32d09d157" (UID: "c83577db-b63f-4908-97a5-48f32d09d157"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.781122 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lv95\" (UniqueName: \"kubernetes.io/projected/c83577db-b63f-4908-97a5-48f32d09d157-kube-api-access-9lv95\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.781154 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.781166 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c83577db-b63f-4908-97a5-48f32d09d157-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.781177 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.793505 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c83577db-b63f-4908-97a5-48f32d09d157" (UID: "c83577db-b63f-4908-97a5-48f32d09d157"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:02 crc kubenswrapper[4710]: I1128 17:21:02.883243 4710 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c83577db-b63f-4908-97a5-48f32d09d157-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.081593 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 28 17:21:03 crc kubenswrapper[4710]: W1128 17:21:03.081929 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64b01ab0_53fd_4ada_897c_3a84952a9fb9.slice/crio-29256410d50fb0c615431f90e8ea1bd158c4b7c42cd28b1014a13184fbff1ceb WatchSource:0}: Error finding container 29256410d50fb0c615431f90e8ea1bd158c4b7c42cd28b1014a13184fbff1ceb: Status 404 returned error can't find the container with id 29256410d50fb0c615431f90e8ea1bd158c4b7c42cd28b1014a13184fbff1ceb Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.161589 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ac3d96-65a4-4549-a26e-a12e06ae39af" path="/var/lib/kubelet/pods/85ac3d96-65a4-4549-a26e-a12e06ae39af/volumes" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.192067 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"64b01ab0-53fd-4ada-897c-3a84952a9fb9","Type":"ContainerStarted","Data":"29256410d50fb0c615431f90e8ea1bd158c4b7c42cd28b1014a13184fbff1ceb"} Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.194806 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c83577db-b63f-4908-97a5-48f32d09d157","Type":"ContainerDied","Data":"9963f4a98959d5b665bf72834580fd55355d825b64addebb5a70f70bd6b3c180"} Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.194873 4710 scope.go:117] "RemoveContainer" containerID="26f66250aaf44adf5f5f5fe311e2098a4072a429de3f1d7d0ea1f47ebf8cbd6f" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.194956 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="af57ffaa-1d64-474e-a0a3-06aa588351bd" containerName="nova-scheduler-scheduler" containerID="cri-o://d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251" gracePeriod=30 Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.195037 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.236191 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.273252 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.276176 4710 scope.go:117] "RemoveContainer" containerID="38d2fed9cfcae99efbff6204056b0b11b6d9c43ea0296dd53966a93fb4fdac75" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.291992 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:03 crc kubenswrapper[4710]: E1128 17:21:03.292508 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83577db-b63f-4908-97a5-48f32d09d157" containerName="nova-metadata-log" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.292528 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83577db-b63f-4908-97a5-48f32d09d157" containerName="nova-metadata-log" Nov 28 17:21:03 crc kubenswrapper[4710]: E1128 17:21:03.292552 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83577db-b63f-4908-97a5-48f32d09d157" containerName="nova-metadata-metadata" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.292559 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83577db-b63f-4908-97a5-48f32d09d157" containerName="nova-metadata-metadata" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.292799 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83577db-b63f-4908-97a5-48f32d09d157" containerName="nova-metadata-metadata" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.292828 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83577db-b63f-4908-97a5-48f32d09d157" containerName="nova-metadata-log" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.294167 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.296741 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.296934 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.312889 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.400644 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-logs\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.401153 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.401373 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-config-data\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.401558 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-894gh\" (UniqueName: \"kubernetes.io/projected/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-kube-api-access-894gh\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.401744 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.505426 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-logs\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.505517 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.505580 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-config-data\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 
17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.505621 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-894gh\" (UniqueName: \"kubernetes.io/projected/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-kube-api-access-894gh\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.505663 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.509029 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-logs\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.512570 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.513122 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-config-data\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.514164 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.527533 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-894gh\" (UniqueName: \"kubernetes.io/projected/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-kube-api-access-894gh\") pod \"nova-metadata-0\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " pod="openstack/nova-metadata-0" Nov 28 17:21:03 crc kubenswrapper[4710]: I1128 17:21:03.619189 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:21:04 crc kubenswrapper[4710]: I1128 17:21:04.108610 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:04 crc kubenswrapper[4710]: I1128 17:21:04.207816 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfb833a1-27e7-478e-a7a6-e92d529a6f8b","Type":"ContainerStarted","Data":"e1b7458c1f2437828884e418706ce4252dc2c83ebe42d79af520cddfd8b63f0b"} Nov 28 17:21:04 crc kubenswrapper[4710]: I1128 17:21:04.210827 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"64b01ab0-53fd-4ada-897c-3a84952a9fb9","Type":"ContainerStarted","Data":"2a466b9f3dab541c78a6735199c2ee1a1f2150f5380109bf3a1b7ada59fef810"} Nov 28 17:21:04 crc kubenswrapper[4710]: I1128 17:21:04.211026 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:04 crc kubenswrapper[4710]: I1128 17:21:04.230187 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.230168156 podStartE2EDuration="2.230168156s" podCreationTimestamp="2025-11-28 17:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:04.223321933 +0000 UTC m=+1353.481621988" watchObservedRunningTime="2025-11-28 17:21:04.230168156 +0000 UTC m=+1353.488468201" Nov 28 17:21:05 crc kubenswrapper[4710]: E1128 17:21:05.142244 4710 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 17:21:05 crc kubenswrapper[4710]: E1128 17:21:05.145395 4710 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 17:21:05 crc kubenswrapper[4710]: E1128 17:21:05.146467 4710 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 17:21:05 crc kubenswrapper[4710]: E1128 17:21:05.146509 4710 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="af57ffaa-1d64-474e-a0a3-06aa588351bd" containerName="nova-scheduler-scheduler" Nov 28 17:21:05 crc kubenswrapper[4710]: I1128 17:21:05.157930 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83577db-b63f-4908-97a5-48f32d09d157" path="/var/lib/kubelet/pods/c83577db-b63f-4908-97a5-48f32d09d157/volumes" Nov 28 17:21:05 crc kubenswrapper[4710]: I1128 17:21:05.229149 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"cfb833a1-27e7-478e-a7a6-e92d529a6f8b","Type":"ContainerStarted","Data":"449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486"} Nov 28 17:21:05 crc kubenswrapper[4710]: I1128 17:21:05.229212 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfb833a1-27e7-478e-a7a6-e92d529a6f8b","Type":"ContainerStarted","Data":"c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7"} Nov 28 17:21:05 crc kubenswrapper[4710]: I1128 17:21:05.255969 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.255919449 podStartE2EDuration="2.255919449s" podCreationTimestamp="2025-11-28 17:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:05.244840215 +0000 UTC m=+1354.503140310" watchObservedRunningTime="2025-11-28 17:21:05.255919449 +0000 UTC m=+1354.514219514" Nov 28 17:21:06 crc kubenswrapper[4710]: I1128 17:21:06.802874 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:21:06 crc kubenswrapper[4710]: I1128 17:21:06.980922 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-combined-ca-bundle\") pod \"af57ffaa-1d64-474e-a0a3-06aa588351bd\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " Nov 28 17:21:06 crc kubenswrapper[4710]: I1128 17:21:06.981081 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-config-data\") pod \"af57ffaa-1d64-474e-a0a3-06aa588351bd\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " Nov 28 17:21:06 crc kubenswrapper[4710]: I1128 17:21:06.981175 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8km8l\" (UniqueName: \"kubernetes.io/projected/af57ffaa-1d64-474e-a0a3-06aa588351bd-kube-api-access-8km8l\") pod \"af57ffaa-1d64-474e-a0a3-06aa588351bd\" (UID: \"af57ffaa-1d64-474e-a0a3-06aa588351bd\") " Nov 28 17:21:06 crc kubenswrapper[4710]: I1128 17:21:06.987499 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af57ffaa-1d64-474e-a0a3-06aa588351bd-kube-api-access-8km8l" (OuterVolumeSpecName: "kube-api-access-8km8l") pod "af57ffaa-1d64-474e-a0a3-06aa588351bd" (UID: "af57ffaa-1d64-474e-a0a3-06aa588351bd"). InnerVolumeSpecName "kube-api-access-8km8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.012430 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "af57ffaa-1d64-474e-a0a3-06aa588351bd" (UID: "af57ffaa-1d64-474e-a0a3-06aa588351bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.020519 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-config-data" (OuterVolumeSpecName: "config-data") pod "af57ffaa-1d64-474e-a0a3-06aa588351bd" (UID: "af57ffaa-1d64-474e-a0a3-06aa588351bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.084226 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.084259 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af57ffaa-1d64-474e-a0a3-06aa588351bd-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.084272 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8km8l\" (UniqueName: \"kubernetes.io/projected/af57ffaa-1d64-474e-a0a3-06aa588351bd-kube-api-access-8km8l\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.098369 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.251382 4710 generic.go:334] "Generic (PLEG): container finished" podID="af57ffaa-1d64-474e-a0a3-06aa588351bd" containerID="d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251" exitCode=0 Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.251431 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.251508 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"af57ffaa-1d64-474e-a0a3-06aa588351bd","Type":"ContainerDied","Data":"d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251"} Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.251534 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"af57ffaa-1d64-474e-a0a3-06aa588351bd","Type":"ContainerDied","Data":"b867ea610d8579708606b208fff83ab29b07119f7ae8cb9eabb8465a6d38a396"} Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.251570 4710 scope.go:117] "RemoveContainer" containerID="d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.255719 4710 generic.go:334] "Generic (PLEG): container finished" podID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerID="adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f" exitCode=0 Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.255769 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e414ac49-72cb-4155-b8f5-5ff39076cfd6","Type":"ContainerDied","Data":"adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f"} Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.255796 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"e414ac49-72cb-4155-b8f5-5ff39076cfd6","Type":"ContainerDied","Data":"773149e1127709efad38be863ddecd3d3e8b31256995e2d1302d2c9dc651f792"} Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.255902 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.284936 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.286376 4710 scope.go:117] "RemoveContainer" containerID="d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251" Nov 28 17:21:07 crc kubenswrapper[4710]: E1128 17:21:07.286996 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251\": container with ID starting with d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251 not found: ID does not exist" containerID="d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.287037 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251"} err="failed to get container status \"d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251\": rpc error: code = NotFound desc = could not find container \"d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251\": container with ID starting with d78b7fd72ec6aa7a8bc20f04ee578176819df6ed144b57600c6fbff0400d4251 not found: ID does not exist" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.287062 4710 scope.go:117] "RemoveContainer" containerID="adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.294459 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e414ac49-72cb-4155-b8f5-5ff39076cfd6-logs\") pod \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.294553 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-combined-ca-bundle\") pod \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.295274 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e414ac49-72cb-4155-b8f5-5ff39076cfd6-logs" (OuterVolumeSpecName: "logs") pod "e414ac49-72cb-4155-b8f5-5ff39076cfd6" (UID: "e414ac49-72cb-4155-b8f5-5ff39076cfd6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.296642 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-config-data\") pod \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.296744 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdjkn\" (UniqueName: \"kubernetes.io/projected/e414ac49-72cb-4155-b8f5-5ff39076cfd6-kube-api-access-hdjkn\") pod \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\" (UID: \"e414ac49-72cb-4155-b8f5-5ff39076cfd6\") " Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.297656 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.297710 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e414ac49-72cb-4155-b8f5-5ff39076cfd6-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.313472 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e414ac49-72cb-4155-b8f5-5ff39076cfd6-kube-api-access-hdjkn" (OuterVolumeSpecName: "kube-api-access-hdjkn") pod "e414ac49-72cb-4155-b8f5-5ff39076cfd6" (UID: "e414ac49-72cb-4155-b8f5-5ff39076cfd6"). InnerVolumeSpecName "kube-api-access-hdjkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.313852 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:07 crc kubenswrapper[4710]: E1128 17:21:07.314459 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af57ffaa-1d64-474e-a0a3-06aa588351bd" containerName="nova-scheduler-scheduler" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.314478 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="af57ffaa-1d64-474e-a0a3-06aa588351bd" containerName="nova-scheduler-scheduler" Nov 28 17:21:07 crc kubenswrapper[4710]: E1128 17:21:07.314499 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-log" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.314508 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-log" Nov 28 17:21:07 crc kubenswrapper[4710]: E1128 17:21:07.314539 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-api" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.314549 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-api" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.314839 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-log" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.314855 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="af57ffaa-1d64-474e-a0a3-06aa588351bd" containerName="nova-scheduler-scheduler" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.314883 4710 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" containerName="nova-api-api" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.315985 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.322220 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.326913 4710 scope.go:117] "RemoveContainer" containerID="2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.330412 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e414ac49-72cb-4155-b8f5-5ff39076cfd6" (UID: "e414ac49-72cb-4155-b8f5-5ff39076cfd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.336183 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-config-data" (OuterVolumeSpecName: "config-data") pod "e414ac49-72cb-4155-b8f5-5ff39076cfd6" (UID: "e414ac49-72cb-4155-b8f5-5ff39076cfd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.339026 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.347453 4710 scope.go:117] "RemoveContainer" containerID="adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f" Nov 28 17:21:07 crc kubenswrapper[4710]: E1128 17:21:07.347925 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f\": container with ID starting with adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f not found: ID does not exist" containerID="adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.347969 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f"} err="failed to get container status \"adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f\": rpc error: code = NotFound desc = could not find container \"adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f\": container with ID starting with adfb63822670ff02d6ac559becb2ab728e11ff477b7d53535a570adc311b833f not found: ID does not exist" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.347990 4710 scope.go:117] "RemoveContainer" containerID="2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7" Nov 28 17:21:07 crc kubenswrapper[4710]: E1128 17:21:07.348289 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7\": container with ID starting with 2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7 not found: ID does not exist" containerID="2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 
17:21:07.348317 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7"} err="failed to get container status \"2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7\": rpc error: code = NotFound desc = could not find container \"2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7\": container with ID starting with 2d3314bb44065ae2ba99cc1e5f48d14eceac32fdcc53a1ba9faade30fcd390f7 not found: ID does not exist" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.399132 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfd29\" (UniqueName: \"kubernetes.io/projected/b7b77a7d-87ae-49de-bd1e-cabc067b1966-kube-api-access-hfd29\") pod \"nova-scheduler-0\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.399185 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.399242 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-config-data\") pod \"nova-scheduler-0\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.399529 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdjkn\" (UniqueName: \"kubernetes.io/projected/e414ac49-72cb-4155-b8f5-5ff39076cfd6-kube-api-access-hdjkn\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.399564 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.399575 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e414ac49-72cb-4155-b8f5-5ff39076cfd6-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.501206 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfd29\" (UniqueName: \"kubernetes.io/projected/b7b77a7d-87ae-49de-bd1e-cabc067b1966-kube-api-access-hfd29\") pod \"nova-scheduler-0\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.501279 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.501345 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-config-data\") pod \"nova-scheduler-0\" (UID: 
\"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.504936 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.505040 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-config-data\") pod \"nova-scheduler-0\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.518238 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfd29\" (UniqueName: \"kubernetes.io/projected/b7b77a7d-87ae-49de-bd1e-cabc067b1966-kube-api-access-hfd29\") pod \"nova-scheduler-0\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.599451 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.615023 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.625018 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.627778 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.635087 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.640801 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.657009 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.705530 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-logs\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.705603 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2z5b\" (UniqueName: \"kubernetes.io/projected/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-kube-api-access-f2z5b\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.705650 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-config-data\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.705686 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.807010 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-logs\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.807085 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2z5b\" (UniqueName: \"kubernetes.io/projected/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-kube-api-access-f2z5b\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.807142 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-config-data\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.807186 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.808064 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-logs\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.813909 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.814389 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-config-data\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.822653 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2z5b\" (UniqueName: \"kubernetes.io/projected/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-kube-api-access-f2z5b\") pod \"nova-api-0\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " pod="openstack/nova-api-0" Nov 28 17:21:07 crc kubenswrapper[4710]: I1128 17:21:07.954589 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:08 crc kubenswrapper[4710]: I1128 17:21:08.095610 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:08 crc kubenswrapper[4710]: I1128 17:21:08.269450 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7b77a7d-87ae-49de-bd1e-cabc067b1966","Type":"ContainerStarted","Data":"630b01d71499d98a93031ac2787f877ccf477a06ce97fd405d767f522e7b9921"} Nov 28 17:21:08 crc kubenswrapper[4710]: I1128 17:21:08.395881 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:08 crc kubenswrapper[4710]: I1128 17:21:08.620288 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:21:08 crc kubenswrapper[4710]: I1128 17:21:08.620425 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:21:09 crc kubenswrapper[4710]: I1128 17:21:09.157153 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af57ffaa-1d64-474e-a0a3-06aa588351bd" path="/var/lib/kubelet/pods/af57ffaa-1d64-474e-a0a3-06aa588351bd/volumes" Nov 28 17:21:09 crc kubenswrapper[4710]: I1128 17:21:09.158320 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e414ac49-72cb-4155-b8f5-5ff39076cfd6" path="/var/lib/kubelet/pods/e414ac49-72cb-4155-b8f5-5ff39076cfd6/volumes" Nov 28 17:21:09 crc kubenswrapper[4710]: I1128 17:21:09.293263 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7b77a7d-87ae-49de-bd1e-cabc067b1966","Type":"ContainerStarted","Data":"f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb"} Nov 28 17:21:09 crc kubenswrapper[4710]: I1128 17:21:09.300567 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b98609a-8c9d-4802-b05c-90b7c7bd9fef","Type":"ContainerStarted","Data":"9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee"} Nov 28 17:21:09 crc kubenswrapper[4710]: I1128 17:21:09.300610 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b98609a-8c9d-4802-b05c-90b7c7bd9fef","Type":"ContainerStarted","Data":"204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724"} Nov 28 17:21:09 crc kubenswrapper[4710]: I1128 17:21:09.300623 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"6b98609a-8c9d-4802-b05c-90b7c7bd9fef","Type":"ContainerStarted","Data":"6471a1ead55130eb994e954a895eadcb49ea516299aa6bdc6211592f7ef89d67"} Nov 28 17:21:09 crc kubenswrapper[4710]: I1128 17:21:09.312569 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.312545119 podStartE2EDuration="2.312545119s" podCreationTimestamp="2025-11-28 17:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:09.308338928 +0000 UTC m=+1358.566639023" watchObservedRunningTime="2025-11-28 17:21:09.312545119 +0000 UTC m=+1358.570845184" Nov 28 17:21:09 crc kubenswrapper[4710]: I1128 17:21:09.331145 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.331124077 podStartE2EDuration="2.331124077s" podCreationTimestamp="2025-11-28 17:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:09.324866022 +0000 UTC m=+1358.583166067" watchObservedRunningTime="2025-11-28 17:21:09.331124077 +0000 UTC m=+1358.589424122" Nov 28 17:21:12 crc kubenswrapper[4710]: I1128 17:21:12.641805 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 17:21:12 crc kubenswrapper[4710]: I1128 17:21:12.649072 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 28 17:21:13 crc kubenswrapper[4710]: I1128 17:21:13.170422 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 17:21:13 crc kubenswrapper[4710]: I1128 17:21:13.344334 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:21:13 crc kubenswrapper[4710]: I1128 17:21:13.345328 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:21:13 crc kubenswrapper[4710]: I1128 17:21:13.620089 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 17:21:13 crc kubenswrapper[4710]: I1128 17:21:13.620147 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 17:21:14 crc kubenswrapper[4710]: I1128 17:21:14.631914 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:21:14 crc kubenswrapper[4710]: I1128 17:21:14.631979 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:21:16 crc kubenswrapper[4710]: I1128 17:21:16.849524 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:21:16 crc kubenswrapper[4710]: I1128 17:21:16.850338 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="02a0cc30-b7bd-4e67-9aad-a4a895909384" containerName="kube-state-metrics" containerID="cri-o://82e30b277816c509cbf159b8d022dcdb19ca69df8dd65c6a2d4237d41a279506" gracePeriod=30 Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.419246 4710 generic.go:334] "Generic (PLEG): container finished" podID="02a0cc30-b7bd-4e67-9aad-a4a895909384" containerID="82e30b277816c509cbf159b8d022dcdb19ca69df8dd65c6a2d4237d41a279506" exitCode=2 Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.419463 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02a0cc30-b7bd-4e67-9aad-a4a895909384","Type":"ContainerDied","Data":"82e30b277816c509cbf159b8d022dcdb19ca69df8dd65c6a2d4237d41a279506"} Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.419490 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"02a0cc30-b7bd-4e67-9aad-a4a895909384","Type":"ContainerDied","Data":"09fe0892bd008f9b1384248a98f7dbb69c11d5b76c59886a002f015ee83ee0c9"} Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.419502 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09fe0892bd008f9b1384248a98f7dbb69c11d5b76c59886a002f015ee83ee0c9" Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.466124 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.630235 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spg4j\" (UniqueName: \"kubernetes.io/projected/02a0cc30-b7bd-4e67-9aad-a4a895909384-kube-api-access-spg4j\") pod \"02a0cc30-b7bd-4e67-9aad-a4a895909384\" (UID: \"02a0cc30-b7bd-4e67-9aad-a4a895909384\") " Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.635976 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02a0cc30-b7bd-4e67-9aad-a4a895909384-kube-api-access-spg4j" (OuterVolumeSpecName: "kube-api-access-spg4j") pod "02a0cc30-b7bd-4e67-9aad-a4a895909384" (UID: "02a0cc30-b7bd-4e67-9aad-a4a895909384"). InnerVolumeSpecName "kube-api-access-spg4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.642516 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.676017 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.733035 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spg4j\" (UniqueName: \"kubernetes.io/projected/02a0cc30-b7bd-4e67-9aad-a4a895909384-kube-api-access-spg4j\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.955000 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:21:17 crc kubenswrapper[4710]: I1128 17:21:17.955096 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.428527 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.465143 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.479405 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.502823 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.530988 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:21:18 crc kubenswrapper[4710]: E1128 17:21:18.531778 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02a0cc30-b7bd-4e67-9aad-a4a895909384" containerName="kube-state-metrics" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.531860 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="02a0cc30-b7bd-4e67-9aad-a4a895909384" containerName="kube-state-metrics" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.532173 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="02a0cc30-b7bd-4e67-9aad-a4a895909384" containerName="kube-state-metrics" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.533449 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.536237 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.536982 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.560166 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.655293 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9439f76f-1d85-4e4a-86a6-0b86e169712b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.655360 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9439f76f-1d85-4e4a-86a6-0b86e169712b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.655438 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9439f76f-1d85-4e4a-86a6-0b86e169712b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.655574 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpm2t\" (UniqueName: \"kubernetes.io/projected/9439f76f-1d85-4e4a-86a6-0b86e169712b-kube-api-access-jpm2t\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.757177 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9439f76f-1d85-4e4a-86a6-0b86e169712b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.757265 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpm2t\" (UniqueName: \"kubernetes.io/projected/9439f76f-1d85-4e4a-86a6-0b86e169712b-kube-api-access-jpm2t\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.757367 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9439f76f-1d85-4e4a-86a6-0b86e169712b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.757398 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9439f76f-1d85-4e4a-86a6-0b86e169712b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.764221 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/9439f76f-1d85-4e4a-86a6-0b86e169712b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.765958 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9439f76f-1d85-4e4a-86a6-0b86e169712b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.766382 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/9439f76f-1d85-4e4a-86a6-0b86e169712b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.786520 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpm2t\" (UniqueName: \"kubernetes.io/projected/9439f76f-1d85-4e4a-86a6-0b86e169712b-kube-api-access-jpm2t\") pod \"kube-state-metrics-0\" (UID: \"9439f76f-1d85-4e4a-86a6-0b86e169712b\") " pod="openstack/kube-state-metrics-0" Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.840265 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.840597 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="proxy-httpd" containerID="cri-o://b9041b7e9256b531f54278751c8f9538fec842491115d265279d2c4cddce392a" gracePeriod=30 Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.840631 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="sg-core" containerID="cri-o://aecbc9a9c50fa805bb468b566c765f95bc1f331793e3ff45291b0088eeb3b4ab" gracePeriod=30 Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.840734 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="ceilometer-central-agent" containerID="cri-o://533973b642b1f014a940156e6c1b4aa3c4dec6fabb0e3757132c20bab98e2d60" gracePeriod=30 Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.840742 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="ceilometer-notification-agent" containerID="cri-o://2947b1c1150b7817ecbf311e4826b7ab4f151671eb5d9880dc20b6be97814900" gracePeriod=30 Nov 28 17:21:18 crc kubenswrapper[4710]: I1128 17:21:18.852423 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.039244 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.039506 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.219:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.203965 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02a0cc30-b7bd-4e67-9aad-a4a895909384" path="/var/lib/kubelet/pods/02a0cc30-b7bd-4e67-9aad-a4a895909384/volumes" Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.364501 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.443513 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9439f76f-1d85-4e4a-86a6-0b86e169712b","Type":"ContainerStarted","Data":"5f10e603885e8195cbf70d3763be74ef59d96ac1be8d3134eac6f377dd7a8f56"} Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.448276 4710 generic.go:334] "Generic (PLEG): container finished" podID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerID="b9041b7e9256b531f54278751c8f9538fec842491115d265279d2c4cddce392a" exitCode=0 Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.448320 4710 generic.go:334] "Generic (PLEG): container finished" podID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerID="aecbc9a9c50fa805bb468b566c765f95bc1f331793e3ff45291b0088eeb3b4ab" exitCode=2 Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.448330 4710 generic.go:334] "Generic (PLEG): container finished" podID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerID="533973b642b1f014a940156e6c1b4aa3c4dec6fabb0e3757132c20bab98e2d60" exitCode=0 Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.448349 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerDied","Data":"b9041b7e9256b531f54278751c8f9538fec842491115d265279d2c4cddce392a"} Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.448390 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerDied","Data":"aecbc9a9c50fa805bb468b566c765f95bc1f331793e3ff45291b0088eeb3b4ab"} Nov 28 17:21:19 crc kubenswrapper[4710]: I1128 17:21:19.448403 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerDied","Data":"533973b642b1f014a940156e6c1b4aa3c4dec6fabb0e3757132c20bab98e2d60"} Nov 28 17:21:20 crc kubenswrapper[4710]: I1128 17:21:20.459987 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"9439f76f-1d85-4e4a-86a6-0b86e169712b","Type":"ContainerStarted","Data":"aa537f51822dcb3478ec84043ba615ee3a0e92fbe092680c157cfa36f0db2089"} Nov 28 17:21:20 crc kubenswrapper[4710]: I1128 17:21:20.460520 4710 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 28 17:21:20 crc kubenswrapper[4710]: I1128 17:21:20.476697 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.131587565 podStartE2EDuration="2.476675659s" podCreationTimestamp="2025-11-28 17:21:18 +0000 UTC" firstStartedPulling="2025-11-28 17:21:19.37904668 +0000 UTC m=+1368.637346725" lastFinishedPulling="2025-11-28 17:21:19.724134774 +0000 UTC m=+1368.982434819" observedRunningTime="2025-11-28 17:21:20.475933476 +0000 UTC m=+1369.734233541" watchObservedRunningTime="2025-11-28 17:21:20.476675659 +0000 UTC m=+1369.734975704" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.490790 4710 generic.go:334] "Generic (PLEG): container finished" podID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerID="2947b1c1150b7817ecbf311e4826b7ab4f151671eb5d9880dc20b6be97814900" exitCode=0 Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.490866 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerDied","Data":"2947b1c1150b7817ecbf311e4826b7ab4f151671eb5d9880dc20b6be97814900"} Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.628128 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.629209 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.636050 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.646983 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.790611 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-scripts\") pod \"04356f71-ec3f-4393-8c94-bf010eeea8ef\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.790710 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-config-data\") pod \"04356f71-ec3f-4393-8c94-bf010eeea8ef\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.790785 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-combined-ca-bundle\") pod \"04356f71-ec3f-4393-8c94-bf010eeea8ef\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.790894 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fpsx\" (UniqueName: \"kubernetes.io/projected/04356f71-ec3f-4393-8c94-bf010eeea8ef-kube-api-access-2fpsx\") pod \"04356f71-ec3f-4393-8c94-bf010eeea8ef\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.790919 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-log-httpd\") pod \"04356f71-ec3f-4393-8c94-bf010eeea8ef\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.791009 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-run-httpd\") pod \"04356f71-ec3f-4393-8c94-bf010eeea8ef\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.791061 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-sg-core-conf-yaml\") pod \"04356f71-ec3f-4393-8c94-bf010eeea8ef\" (UID: \"04356f71-ec3f-4393-8c94-bf010eeea8ef\") " Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.792826 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "04356f71-ec3f-4393-8c94-bf010eeea8ef" (UID: "04356f71-ec3f-4393-8c94-bf010eeea8ef"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.793165 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04356f71-ec3f-4393-8c94-bf010eeea8ef" (UID: "04356f71-ec3f-4393-8c94-bf010eeea8ef"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.801022 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-scripts" (OuterVolumeSpecName: "scripts") pod "04356f71-ec3f-4393-8c94-bf010eeea8ef" (UID: "04356f71-ec3f-4393-8c94-bf010eeea8ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.803638 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04356f71-ec3f-4393-8c94-bf010eeea8ef-kube-api-access-2fpsx" (OuterVolumeSpecName: "kube-api-access-2fpsx") pod "04356f71-ec3f-4393-8c94-bf010eeea8ef" (UID: "04356f71-ec3f-4393-8c94-bf010eeea8ef"). InnerVolumeSpecName "kube-api-access-2fpsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.828009 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04356f71-ec3f-4393-8c94-bf010eeea8ef" (UID: "04356f71-ec3f-4393-8c94-bf010eeea8ef"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.893879 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fpsx\" (UniqueName: \"kubernetes.io/projected/04356f71-ec3f-4393-8c94-bf010eeea8ef-kube-api-access-2fpsx\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.893911 4710 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.893923 4710 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04356f71-ec3f-4393-8c94-bf010eeea8ef-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.893933 4710 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.893943 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.908120 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04356f71-ec3f-4393-8c94-bf010eeea8ef" (UID: "04356f71-ec3f-4393-8c94-bf010eeea8ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.913850 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-config-data" (OuterVolumeSpecName: "config-data") pod "04356f71-ec3f-4393-8c94-bf010eeea8ef" (UID: "04356f71-ec3f-4393-8c94-bf010eeea8ef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.995483 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:23 crc kubenswrapper[4710]: I1128 17:21:23.995521 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04356f71-ec3f-4393-8c94-bf010eeea8ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.533862 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.535216 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04356f71-ec3f-4393-8c94-bf010eeea8ef","Type":"ContainerDied","Data":"4200e2d10556d6f78806791f1da56d5bc29a06ef04b72337d43752488dfef16d"} Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.537798 4710 scope.go:117] "RemoveContainer" containerID="b9041b7e9256b531f54278751c8f9538fec842491115d265279d2c4cddce392a" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.539012 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.576970 4710 scope.go:117] "RemoveContainer" containerID="aecbc9a9c50fa805bb468b566c765f95bc1f331793e3ff45291b0088eeb3b4ab" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.649076 4710 scope.go:117] "RemoveContainer" containerID="2947b1c1150b7817ecbf311e4826b7ab4f151671eb5d9880dc20b6be97814900" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.657630 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.687166 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.694880 4710 scope.go:117] "RemoveContainer" containerID="533973b642b1f014a940156e6c1b4aa3c4dec6fabb0e3757132c20bab98e2d60" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.698816 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:24 crc kubenswrapper[4710]: E1128 17:21:24.699257 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="ceilometer-central-agent" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.699269 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="ceilometer-central-agent" Nov 28 17:21:24 crc kubenswrapper[4710]: E1128 17:21:24.699294 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="proxy-httpd" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.699299 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="proxy-httpd" Nov 28 17:21:24 crc kubenswrapper[4710]: E1128 17:21:24.699311 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="ceilometer-notification-agent" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.699318 4710 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="ceilometer-notification-agent" Nov 28 17:21:24 crc kubenswrapper[4710]: E1128 17:21:24.699336 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="sg-core" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.699343 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="sg-core" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.699552 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="sg-core" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.699575 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="proxy-httpd" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.699583 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="ceilometer-notification-agent" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.699593 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" containerName="ceilometer-central-agent" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.703207 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.706953 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.707114 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.710235 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.719515 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.859772 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-run-httpd\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.859870 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.859901 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.859947 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.859974 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-log-httpd\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.859990 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-config-data\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.860032 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-scripts\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.860054 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh2qj\" (UniqueName: \"kubernetes.io/projected/ee089353-0557-43c7-b7d7-42142c146da9-kube-api-access-vh2qj\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.961713 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.961886 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.961928 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-log-httpd\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.961958 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-config-data\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.962019 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-scripts\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.962047 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh2qj\" (UniqueName: 
\"kubernetes.io/projected/ee089353-0557-43c7-b7d7-42142c146da9-kube-api-access-vh2qj\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.962121 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-run-httpd\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.962185 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.963349 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-run-httpd\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.963512 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-log-httpd\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.967010 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.967676 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-config-data\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.967874 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.969079 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-scripts\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.969082 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:24 crc kubenswrapper[4710]: I1128 17:21:24.981245 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh2qj\" (UniqueName: 
\"kubernetes.io/projected/ee089353-0557-43c7-b7d7-42142c146da9-kube-api-access-vh2qj\") pod \"ceilometer-0\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") " pod="openstack/ceilometer-0" Nov 28 17:21:25 crc kubenswrapper[4710]: I1128 17:21:25.075600 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:21:25 crc kubenswrapper[4710]: I1128 17:21:25.161591 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04356f71-ec3f-4393-8c94-bf010eeea8ef" path="/var/lib/kubelet/pods/04356f71-ec3f-4393-8c94-bf010eeea8ef/volumes" Nov 28 17:21:25 crc kubenswrapper[4710]: I1128 17:21:25.522367 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:25 crc kubenswrapper[4710]: W1128 17:21:25.545950 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee089353_0557_43c7_b7d7_42142c146da9.slice/crio-ef56ad001c64e24bb5f48c60a37c39fec3ca072e2b962838baa0e0398fb0afe0 WatchSource:0}: Error finding container ef56ad001c64e24bb5f48c60a37c39fec3ca072e2b962838baa0e0398fb0afe0: Status 404 returned error can't find the container with id ef56ad001c64e24bb5f48c60a37c39fec3ca072e2b962838baa0e0398fb0afe0 Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.355396 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.491725 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-config-data\") pod \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.491816 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-combined-ca-bundle\") pod \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.491915 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2m5p\" (UniqueName: \"kubernetes.io/projected/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-kube-api-access-z2m5p\") pod \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\" (UID: \"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0\") " Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.502862 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-kube-api-access-z2m5p" (OuterVolumeSpecName: "kube-api-access-z2m5p") pod "f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0" (UID: "f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0"). InnerVolumeSpecName "kube-api-access-z2m5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.522947 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-config-data" (OuterVolumeSpecName: "config-data") pod "f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0" (UID: "f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.526253 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0" (UID: "f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.568534 4710 generic.go:334] "Generic (PLEG): container finished" podID="f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0" containerID="5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe" exitCode=137 Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.568598 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0","Type":"ContainerDied","Data":"5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe"} Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.568624 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0","Type":"ContainerDied","Data":"9102855add439f0ca76bb5a2cb536b55b34570c11dc55994d0fd3ceecb24500a"} Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.568639 4710 scope.go:117] "RemoveContainer" containerID="5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.568720 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.580513 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerStarted","Data":"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd"} Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.580550 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerStarted","Data":"ef56ad001c64e24bb5f48c60a37c39fec3ca072e2b962838baa0e0398fb0afe0"} Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.594782 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.594805 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.594815 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2m5p\" (UniqueName: \"kubernetes.io/projected/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0-kube-api-access-z2m5p\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.607293 4710 scope.go:117] "RemoveContainer" containerID="5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe" Nov 28 17:21:26 crc kubenswrapper[4710]: E1128 17:21:26.607830 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe\": container with ID starting with 5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe not found: ID does not exist" containerID="5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.607880 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe"} err="failed to get container status \"5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe\": rpc error: code = NotFound desc = could not find container \"5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe\": container with ID starting with 5d251c07a3b36c41fcf2e8154721f5fce6e3c81952fe4f30cbb19daea516a4fe not found: ID does not exist" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.618917 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.634216 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.650591 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:21:26 crc kubenswrapper[4710]: E1128 17:21:26.651204 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.651229 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.651549 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0" containerName="nova-cell1-novncproxy-novncproxy" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.652539 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.659368 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.659700 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.671170 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.698098 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.700187 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.701732 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.701882 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.702113 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr2cm\" (UniqueName: \"kubernetes.io/projected/64d64187-1205-4085-8084-39e9b4c2efec-kube-api-access-pr2cm\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.702142 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.804118 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr2cm\" (UniqueName: \"kubernetes.io/projected/64d64187-1205-4085-8084-39e9b4c2efec-kube-api-access-pr2cm\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.804190 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 
28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.804246 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.804358 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.804482 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.808691 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.808824 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.809535 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.818865 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/64d64187-1205-4085-8084-39e9b4c2efec-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.838268 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr2cm\" (UniqueName: \"kubernetes.io/projected/64d64187-1205-4085-8084-39e9b4c2efec-kube-api-access-pr2cm\") pod \"nova-cell1-novncproxy-0\" (UID: \"64d64187-1205-4085-8084-39e9b4c2efec\") " pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:26 crc kubenswrapper[4710]: I1128 17:21:26.989741 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:27 crc kubenswrapper[4710]: I1128 17:21:27.158222 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0" path="/var/lib/kubelet/pods/f6efc34d-ed0c-4e97-bf22-2e8b6bbb53b0/volumes" Nov 28 17:21:27 crc kubenswrapper[4710]: I1128 17:21:27.464729 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 28 17:21:27 crc kubenswrapper[4710]: I1128 17:21:27.596901 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64d64187-1205-4085-8084-39e9b4c2efec","Type":"ContainerStarted","Data":"7e3559cbdc4205af5de28ee7bd84ff3414bc2a5be5e0614e5fb8b6a9e78c8980"} Nov 28 17:21:27 crc kubenswrapper[4710]: I1128 17:21:27.960885 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 17:21:27 crc kubenswrapper[4710]: I1128 17:21:27.961319 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 17:21:27 crc kubenswrapper[4710]: I1128 17:21:27.963198 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 17:21:27 crc kubenswrapper[4710]: I1128 17:21:27.964088 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.617635 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerStarted","Data":"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa"} Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.621038 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"64d64187-1205-4085-8084-39e9b4c2efec","Type":"ContainerStarted","Data":"bc8155991fc9ea7320c0559affaf6e22efb38c3a8ab3fc9908e257a4a4058ddb"} Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.621670 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.625397 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.648518 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.648496792 podStartE2EDuration="2.648496792s" podCreationTimestamp="2025-11-28 17:21:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:28.635695624 +0000 UTC m=+1377.893995669" watchObservedRunningTime="2025-11-28 17:21:28.648496792 +0000 UTC m=+1377.906796837" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.827867 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-zrwtj"] Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.829882 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.865400 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-zrwtj"] Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.924616 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.956215 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.956330 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.956419 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-config\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.956496 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5w9\" (UniqueName: \"kubernetes.io/projected/57fb07c0-57b1-4950-b522-1a4b7462a841-kube-api-access-jl5w9\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.956530 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:28 crc kubenswrapper[4710]: I1128 17:21:28.956590 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.058105 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.058231 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: 
\"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.058263 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.058334 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-config\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.058399 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl5w9\" (UniqueName: \"kubernetes.io/projected/57fb07c0-57b1-4950-b522-1a4b7462a841-kube-api-access-jl5w9\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.058425 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.059021 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.059512 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.059805 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-config\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.060382 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.062334 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-zrwtj\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 
Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.148683 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj"
Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.643227 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerStarted","Data":"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57"}
Nov 28 17:21:29 crc kubenswrapper[4710]: I1128 17:21:29.747867 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-zrwtj"]
Nov 28 17:21:30 crc kubenswrapper[4710]: I1128 17:21:30.653569 4710 generic.go:334] "Generic (PLEG): container finished" podID="57fb07c0-57b1-4950-b522-1a4b7462a841" containerID="2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039" exitCode=0
Nov 28 17:21:30 crc kubenswrapper[4710]: I1128 17:21:30.655215 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" event={"ID":"57fb07c0-57b1-4950-b522-1a4b7462a841","Type":"ContainerDied","Data":"2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039"}
Nov 28 17:21:30 crc kubenswrapper[4710]: I1128 17:21:30.655247 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" event={"ID":"57fb07c0-57b1-4950-b522-1a4b7462a841","Type":"ContainerStarted","Data":"eff1c5709d8972ff948fee19d4aadf74f86d1d517b2edbc4894885cf2ef9cde6"}
Nov 28 17:21:31 crc kubenswrapper[4710]: I1128 17:21:31.299190 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 28 17:21:31 crc kubenswrapper[4710]: I1128 17:21:31.667114 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerStarted","Data":"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955"}
Nov 28 17:21:31 crc kubenswrapper[4710]: I1128 17:21:31.669172 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 28 17:21:31 crc kubenswrapper[4710]: I1128 17:21:31.672281 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" event={"ID":"57fb07c0-57b1-4950-b522-1a4b7462a841","Type":"ContainerStarted","Data":"8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d"}
Nov 28 17:21:31 crc kubenswrapper[4710]: I1128 17:21:31.672612 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-api" containerID="cri-o://9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee" gracePeriod=30
Nov 28 17:21:31 crc kubenswrapper[4710]: I1128 17:21:31.672723 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-log" containerID="cri-o://204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724" gracePeriod=30
Nov 28 17:21:31 crc kubenswrapper[4710]: I1128 17:21:31.699811 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.872179063 podStartE2EDuration="7.699792684s" podCreationTimestamp="2025-11-28 17:21:24 +0000 UTC" firstStartedPulling="2025-11-28 17:21:25.549554317 +0000 UTC m=+1374.807854362" lastFinishedPulling="2025-11-28 17:21:30.377167938 +0000 UTC m=+1379.635467983" observedRunningTime="2025-11-28 17:21:31.691284319 +0000 UTC m=+1380.949584374" watchObservedRunningTime="2025-11-28 17:21:31.699792684 +0000 UTC m=+1380.958092729"
Nov 28 17:21:31 crc kubenswrapper[4710]: I1128 17:21:31.723251 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" podStartSLOduration=3.723230183 podStartE2EDuration="3.723230183s" podCreationTimestamp="2025-11-28 17:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:31.721651033 +0000 UTC m=+1380.979951098" watchObservedRunningTime="2025-11-28 17:21:31.723230183 +0000 UTC m=+1380.981530228"
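[Annotation] The numbers in these two tracker entries are internally consistent with reading podStartSLOduration as the end-to-end startup time minus the image-pull window; that reading is inferred from the values printed here, not quoted from kubelet documentation. A quick check, using the seconds-past-17:21 values copied from the ceilometer-0 entry above:

    # Values copied from the ceilometer-0 "Observed pod startup duration" entry.
    e2e        = 7.699792684     # podStartE2EDuration
    pull_start = 25.549554317    # firstStartedPulling, seconds past 17:21
    pull_end   = 30.377167938    # lastFinishedPulling, seconds past 17:21
    slo        = 2.872179063     # podStartSLOduration

    # SLO duration == E2E duration minus the pull window, to the nanosecond.
    assert abs((e2e - (pull_end - pull_start)) - slo) < 1e-9

The dnsmasq-dns entry fits the same reading: its pull timestamps are the zero time (0001-01-01), so the pull window is empty and podStartSLOduration equals podStartE2EDuration at 3.723230183 s.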
Nov 28 17:21:31 crc kubenswrapper[4710]: I1128 17:21:31.989881 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Nov 28 17:21:32 crc kubenswrapper[4710]: I1128 17:21:32.208315 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 17:21:32 crc kubenswrapper[4710]: I1128 17:21:32.684183 4710 generic.go:334] "Generic (PLEG): container finished" podID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerID="204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724" exitCode=143
Nov 28 17:21:32 crc kubenswrapper[4710]: I1128 17:21:32.686231 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b98609a-8c9d-4802-b05c-90b7c7bd9fef","Type":"ContainerDied","Data":"204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724"}
Nov 28 17:21:32 crc kubenswrapper[4710]: I1128 17:21:32.686973 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj"
Nov 28 17:21:33 crc kubenswrapper[4710]: I1128 17:21:33.695731 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-central-agent" containerID="cri-o://9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd" gracePeriod=30
Nov 28 17:21:33 crc kubenswrapper[4710]: I1128 17:21:33.696218 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="proxy-httpd" containerID="cri-o://f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955" gracePeriod=30
Nov 28 17:21:33 crc kubenswrapper[4710]: I1128 17:21:33.696268 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="sg-core" containerID="cri-o://31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57" gracePeriod=30
Nov 28 17:21:33 crc kubenswrapper[4710]: I1128 17:21:33.696281 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-notification-agent" containerID="cri-o://ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa" gracePeriod=30
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.435588 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.610901 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh2qj\" (UniqueName: \"kubernetes.io/projected/ee089353-0557-43c7-b7d7-42142c146da9-kube-api-access-vh2qj\") pod \"ee089353-0557-43c7-b7d7-42142c146da9\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") "
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.610989 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-combined-ca-bundle\") pod \"ee089353-0557-43c7-b7d7-42142c146da9\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") "
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.611115 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-config-data\") pod \"ee089353-0557-43c7-b7d7-42142c146da9\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") "
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.611170 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-scripts\") pod \"ee089353-0557-43c7-b7d7-42142c146da9\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") "
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.611247 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-run-httpd\") pod \"ee089353-0557-43c7-b7d7-42142c146da9\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") "
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.611290 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-log-httpd\") pod \"ee089353-0557-43c7-b7d7-42142c146da9\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") "
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.611349 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-sg-core-conf-yaml\") pod \"ee089353-0557-43c7-b7d7-42142c146da9\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") "
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.611372 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-ceilometer-tls-certs\") pod \"ee089353-0557-43c7-b7d7-42142c146da9\" (UID: \"ee089353-0557-43c7-b7d7-42142c146da9\") "
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.611866 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ee089353-0557-43c7-b7d7-42142c146da9" (UID: "ee089353-0557-43c7-b7d7-42142c146da9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.611997 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ee089353-0557-43c7-b7d7-42142c146da9" (UID: "ee089353-0557-43c7-b7d7-42142c146da9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.616989 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-scripts" (OuterVolumeSpecName: "scripts") pod "ee089353-0557-43c7-b7d7-42142c146da9" (UID: "ee089353-0557-43c7-b7d7-42142c146da9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.621021 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee089353-0557-43c7-b7d7-42142c146da9-kube-api-access-vh2qj" (OuterVolumeSpecName: "kube-api-access-vh2qj") pod "ee089353-0557-43c7-b7d7-42142c146da9" (UID: "ee089353-0557-43c7-b7d7-42142c146da9"). InnerVolumeSpecName "kube-api-access-vh2qj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.656035 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ee089353-0557-43c7-b7d7-42142c146da9" (UID: "ee089353-0557-43c7-b7d7-42142c146da9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.673672 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ee089353-0557-43c7-b7d7-42142c146da9" (UID: "ee089353-0557-43c7-b7d7-42142c146da9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712786 4710 generic.go:334] "Generic (PLEG): container finished" podID="ee089353-0557-43c7-b7d7-42142c146da9" containerID="f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955" exitCode=0 Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712831 4710 generic.go:334] "Generic (PLEG): container finished" podID="ee089353-0557-43c7-b7d7-42142c146da9" containerID="31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57" exitCode=2 Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712841 4710 generic.go:334] "Generic (PLEG): container finished" podID="ee089353-0557-43c7-b7d7-42142c146da9" containerID="ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa" exitCode=0 Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712850 4710 generic.go:334] "Generic (PLEG): container finished" podID="ee089353-0557-43c7-b7d7-42142c146da9" containerID="9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd" exitCode=0 Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712871 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerDied","Data":"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955"} Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712904 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerDied","Data":"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57"} Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712918 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerDied","Data":"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa"} Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712929 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerDied","Data":"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd"} Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712942 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ee089353-0557-43c7-b7d7-42142c146da9","Type":"ContainerDied","Data":"ef56ad001c64e24bb5f48c60a37c39fec3ca072e2b962838baa0e0398fb0afe0"} Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.712962 4710 scope.go:117] "RemoveContainer" containerID="f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.713870 4710 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.713907 4710 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.713925 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh2qj\" (UniqueName: \"kubernetes.io/projected/ee089353-0557-43c7-b7d7-42142c146da9-kube-api-access-vh2qj\") on node \"crc\" 
DevicePath \"\"" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.713942 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.713953 4710 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.713965 4710 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ee089353-0557-43c7-b7d7-42142c146da9-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.713990 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.724846 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee089353-0557-43c7-b7d7-42142c146da9" (UID: "ee089353-0557-43c7-b7d7-42142c146da9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.745339 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-config-data" (OuterVolumeSpecName: "config-data") pod "ee089353-0557-43c7-b7d7-42142c146da9" (UID: "ee089353-0557-43c7-b7d7-42142c146da9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.751297 4710 scope.go:117] "RemoveContainer" containerID="31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.787366 4710 scope.go:117] "RemoveContainer" containerID="ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.812275 4710 scope.go:117] "RemoveContainer" containerID="9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.815690 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:34 crc kubenswrapper[4710]: I1128 17:21:34.815725 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee089353-0557-43c7-b7d7-42142c146da9-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.011304 4710 scope.go:117] "RemoveContainer" containerID="f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955" Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.012142 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955\": container with ID starting with f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955 not found: ID does not exist" containerID="f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.012176 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955"} err="failed to get container status \"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955\": rpc error: code = NotFound desc = could not find container \"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955\": container with ID starting with f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955 not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.012203 4710 scope.go:117] "RemoveContainer" containerID="31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57" Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.012901 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57\": container with ID starting with 31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57 not found: ID does not exist" containerID="31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.012919 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57"} err="failed to get container status \"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57\": rpc error: code = NotFound desc = could not find container \"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57\": container with ID starting with 
31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57 not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.012932 4710 scope.go:117] "RemoveContainer" containerID="ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa" Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.013175 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa\": container with ID starting with ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa not found: ID does not exist" containerID="ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.013190 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa"} err="failed to get container status \"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa\": rpc error: code = NotFound desc = could not find container \"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa\": container with ID starting with ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.013203 4710 scope.go:117] "RemoveContainer" containerID="9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd" Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.013511 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd\": container with ID starting with 9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd not found: ID does not exist" containerID="9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.013531 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd"} err="failed to get container status \"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd\": rpc error: code = NotFound desc = could not find container \"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd\": container with ID starting with 9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.013543 4710 scope.go:117] "RemoveContainer" containerID="f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.013807 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955"} err="failed to get container status \"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955\": rpc error: code = NotFound desc = could not find container \"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955\": container with ID starting with f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955 not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.013830 4710 scope.go:117] "RemoveContainer" containerID="31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57" Nov 28 17:21:35 crc 
kubenswrapper[4710]: I1128 17:21:35.014160 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57"} err="failed to get container status \"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57\": rpc error: code = NotFound desc = could not find container \"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57\": container with ID starting with 31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57 not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.014174 4710 scope.go:117] "RemoveContainer" containerID="ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.021317 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa"} err="failed to get container status \"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa\": rpc error: code = NotFound desc = could not find container \"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa\": container with ID starting with ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.021372 4710 scope.go:117] "RemoveContainer" containerID="9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.021697 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd"} err="failed to get container status \"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd\": rpc error: code = NotFound desc = could not find container \"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd\": container with ID starting with 9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.021725 4710 scope.go:117] "RemoveContainer" containerID="f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.022707 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955"} err="failed to get container status \"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955\": rpc error: code = NotFound desc = could not find container \"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955\": container with ID starting with f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955 not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.022724 4710 scope.go:117] "RemoveContainer" containerID="31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.024176 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57"} err="failed to get container status \"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57\": rpc error: code = NotFound desc = could not find container \"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57\": container with ID 
starting with 31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57 not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.024211 4710 scope.go:117] "RemoveContainer" containerID="ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.026052 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa"} err="failed to get container status \"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa\": rpc error: code = NotFound desc = could not find container \"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa\": container with ID starting with ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.026082 4710 scope.go:117] "RemoveContainer" containerID="9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.026380 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd"} err="failed to get container status \"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd\": rpc error: code = NotFound desc = could not find container \"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd\": container with ID starting with 9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.026407 4710 scope.go:117] "RemoveContainer" containerID="f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.026660 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955"} err="failed to get container status \"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955\": rpc error: code = NotFound desc = could not find container \"f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955\": container with ID starting with f0207c6bd5999803dd31beae117f8f4b18ca761220279277d3da23650ed13955 not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.026683 4710 scope.go:117] "RemoveContainer" containerID="31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.027182 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57"} err="failed to get container status \"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57\": rpc error: code = NotFound desc = could not find container \"31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57\": container with ID starting with 31afc24704af2336a6f5bdd43ea8113274224518853a6f82e8365da4e94e0c57 not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.027212 4710 scope.go:117] "RemoveContainer" containerID="ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.028809 4710 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa"} err="failed to get container status \"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa\": rpc error: code = NotFound desc = could not find container \"ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa\": container with ID starting with ec2541eb35ae692db224fb80783755baefce49bffa81cc71a4c711ac99d7e5aa not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.029166 4710 scope.go:117] "RemoveContainer" containerID="9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.029577 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd"} err="failed to get container status \"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd\": rpc error: code = NotFound desc = could not find container \"9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd\": container with ID starting with 9547d64935d7bd28fa6efd8295c0fa2b523f7f6f8a1c928f1c67beaace54d6bd not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.062831 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.072304 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.113886 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.114382 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="proxy-httpd" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114402 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="proxy-httpd" Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.114430 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-notification-agent" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114439 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-notification-agent" Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.114457 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-central-agent" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114463 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-central-agent" Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.114488 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="sg-core" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114495 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="sg-core" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114731 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-central-agent" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.062831 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.072304 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.113886 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.114382 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="proxy-httpd"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114402 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="proxy-httpd"
Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.114430 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-notification-agent"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114439 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-notification-agent"
Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.114457 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-central-agent"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114463 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-central-agent"
Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.114488 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="sg-core"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114495 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="sg-core"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114731 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-central-agent"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114776 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="proxy-httpd"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114799 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="sg-core"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.114810 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee089353-0557-43c7-b7d7-42142c146da9" containerName="ceilometer-notification-agent"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.117135 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.119933 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.120067 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.120106 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.120182 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.156862 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee089353-0557-43c7-b7d7-42142c146da9" path="/var/lib/kubelet/pods/ee089353-0557-43c7-b7d7-42142c146da9/volumes"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.223320 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-config-data\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.223364 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.223494 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.223655 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-scripts\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.223736 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-log-httpd\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.223916 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-run-httpd\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.223959 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.224097 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pgm\" (UniqueName: \"kubernetes.io/projected/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-kube-api-access-t9pgm\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.326414 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-scripts\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.326512 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-log-httpd\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.326636 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-run-httpd\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.326671 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.327106 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-run-httpd\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.327568 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pgm\" (UniqueName: \"kubernetes.io/projected/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-kube-api-access-t9pgm\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.327662 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-config-data\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.327683 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.327787 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.327106 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-log-httpd\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.331544 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.331801 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.332285 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-scripts\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.332319 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-config-data\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.332974 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.345868 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9pgm\" (UniqueName: \"kubernetes.io/projected/6ebdff21-cac4-4864-8bc5-47c8d8ca30ca-kube-api-access-t9pgm\") pod \"ceilometer-0\" (UID: \"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca\") " pod="openstack/ceilometer-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.404793 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.450374 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Need to start a new one" pod="openstack/ceilometer-0" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.530247 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-combined-ca-bundle\") pod \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.530589 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2z5b\" (UniqueName: \"kubernetes.io/projected/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-kube-api-access-f2z5b\") pod \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.530705 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-logs\") pod \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.530876 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-config-data\") pod \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\" (UID: \"6b98609a-8c9d-4802-b05c-90b7c7bd9fef\") " Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.531186 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-logs" (OuterVolumeSpecName: "logs") pod "6b98609a-8c9d-4802-b05c-90b7c7bd9fef" (UID: "6b98609a-8c9d-4802-b05c-90b7c7bd9fef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.531969 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.539560 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-kube-api-access-f2z5b" (OuterVolumeSpecName: "kube-api-access-f2z5b") pod "6b98609a-8c9d-4802-b05c-90b7c7bd9fef" (UID: "6b98609a-8c9d-4802-b05c-90b7c7bd9fef"). InnerVolumeSpecName "kube-api-access-f2z5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.570568 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b98609a-8c9d-4802-b05c-90b7c7bd9fef" (UID: "6b98609a-8c9d-4802-b05c-90b7c7bd9fef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.585564 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-config-data" (OuterVolumeSpecName: "config-data") pod "6b98609a-8c9d-4802-b05c-90b7c7bd9fef" (UID: "6b98609a-8c9d-4802-b05c-90b7c7bd9fef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.633514 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.633546 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.633559 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2z5b\" (UniqueName: \"kubernetes.io/projected/6b98609a-8c9d-4802-b05c-90b7c7bd9fef-kube-api-access-f2z5b\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.725203 4710 generic.go:334] "Generic (PLEG): container finished" podID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerID="9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee" exitCode=0 Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.725516 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b98609a-8c9d-4802-b05c-90b7c7bd9fef","Type":"ContainerDied","Data":"9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee"} Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.725540 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6b98609a-8c9d-4802-b05c-90b7c7bd9fef","Type":"ContainerDied","Data":"6471a1ead55130eb994e954a895eadcb49ea516299aa6bdc6211592f7ef89d67"} Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.725556 4710 scope.go:117] "RemoveContainer" containerID="9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.725678 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.749422 4710 scope.go:117] "RemoveContainer" containerID="204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.787307 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.793107 4710 scope.go:117] "RemoveContainer" containerID="9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee" Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.798554 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee\": container with ID starting with 9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee not found: ID does not exist" containerID="9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.798616 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee"} err="failed to get container status \"9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee\": rpc error: code = NotFound desc = could not find container \"9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee\": container with ID starting with 9bdc29adefbb0e0db6ea468e182e6ffb3d302940c79bc88d463a6a6e9bc39eee not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.798657 4710 scope.go:117] "RemoveContainer" containerID="204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724" Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.799157 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724\": container with ID starting with 204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724 not found: ID does not exist" containerID="204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.799186 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724"} err="failed to get container status \"204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724\": rpc error: code = NotFound desc = could not find container \"204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724\": container with ID starting with 204aa386065bce8401506b0ac452ad4cb9f281028298655b7d42fd55c9b49724 not found: ID does not exist" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.803268 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.815815 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:35 crc kubenswrapper[4710]: E1128 17:21:35.816493 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-log" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.816511 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-log" Nov 28 17:21:35 crc 
kubenswrapper[4710]: E1128 17:21:35.816550 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-api" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.816558 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-api" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.816811 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-api" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.816838 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" containerName="nova-api-log" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.818278 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.820588 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.821909 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.822271 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.826020 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.940730 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09bc7732-2c21-488b-b80a-4731542028bf-logs\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.940883 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-public-tls-certs\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.940975 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.941250 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-config-data\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.941332 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqm9\" (UniqueName: \"kubernetes.io/projected/09bc7732-2c21-488b-b80a-4731542028bf-kube-api-access-6xqm9\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.941395 4710 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:35 crc kubenswrapper[4710]: I1128 17:21:35.986857 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 28 17:21:35 crc kubenswrapper[4710]: W1128 17:21:35.989421 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ebdff21_cac4_4864_8bc5_47c8d8ca30ca.slice/crio-d77f0e6a4183fedc380c357e83b515c7a3eb01d4b99c6b99deffa3e74b60b103 WatchSource:0}: Error finding container d77f0e6a4183fedc380c357e83b515c7a3eb01d4b99c6b99deffa3e74b60b103: Status 404 returned error can't find the container with id d77f0e6a4183fedc380c357e83b515c7a3eb01d4b99c6b99deffa3e74b60b103 Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.042981 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09bc7732-2c21-488b-b80a-4731542028bf-logs\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.043095 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-public-tls-certs\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.043174 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.043317 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-config-data\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.043358 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xqm9\" (UniqueName: \"kubernetes.io/projected/09bc7732-2c21-488b-b80a-4731542028bf-kube-api-access-6xqm9\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.043421 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.044375 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09bc7732-2c21-488b-b80a-4731542028bf-logs\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.048966 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-config-data\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.049262 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-public-tls-certs\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.049282 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-internal-tls-certs\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.051008 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.064137 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xqm9\" (UniqueName: \"kubernetes.io/projected/09bc7732-2c21-488b-b80a-4731542028bf-kube-api-access-6xqm9\") pod \"nova-api-0\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.206420 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.732584 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:36 crc kubenswrapper[4710]: W1128 17:21:36.737152 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09bc7732_2c21_488b_b80a_4731542028bf.slice/crio-a7757b0d9f3be8e107c81a4b0fa74dd555b81fc79cabd5aa6337f9e4fc4b6111 WatchSource:0}: Error finding container a7757b0d9f3be8e107c81a4b0fa74dd555b81fc79cabd5aa6337f9e4fc4b6111: Status 404 returned error can't find the container with id a7757b0d9f3be8e107c81a4b0fa74dd555b81fc79cabd5aa6337f9e4fc4b6111 Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.743683 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca","Type":"ContainerStarted","Data":"d77f0e6a4183fedc380c357e83b515c7a3eb01d4b99c6b99deffa3e74b60b103"} Nov 28 17:21:36 crc kubenswrapper[4710]: I1128 17:21:36.990172 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.008436 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.155578 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b98609a-8c9d-4802-b05c-90b7c7bd9fef" path="/var/lib/kubelet/pods/6b98609a-8c9d-4802-b05c-90b7c7bd9fef/volumes" Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.756829 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca","Type":"ContainerStarted","Data":"deef9bfc7e821b79fce4afbab69d42bcd055599585e99474bf80902681bbbd1f"} Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.757145 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca","Type":"ContainerStarted","Data":"c517b7dc37ba3cf9323f7fe01f4de75fe758f8877aff85180fff481cd2753851"} Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.761518 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09bc7732-2c21-488b-b80a-4731542028bf","Type":"ContainerStarted","Data":"8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80"} Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.761579 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09bc7732-2c21-488b-b80a-4731542028bf","Type":"ContainerStarted","Data":"41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07"} Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.761593 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09bc7732-2c21-488b-b80a-4731542028bf","Type":"ContainerStarted","Data":"a7757b0d9f3be8e107c81a4b0fa74dd555b81fc79cabd5aa6337f9e4fc4b6111"} Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.775105 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.795586 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.795558546 podStartE2EDuration="2.795558546s" podCreationTimestamp="2025-11-28 17:21:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:37.784144891 +0000 UTC m=+1387.042444936" watchObservedRunningTime="2025-11-28 17:21:37.795558546 +0000 UTC m=+1387.053858591" Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.942710 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-wqppl"] Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.944802 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.946388 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.947475 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 28 17:21:37 crc kubenswrapper[4710]: I1128 17:21:37.964784 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqppl"] Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.026042 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wg9v\" (UniqueName: \"kubernetes.io/projected/e41c8bff-334a-4b57-bff0-c5716b30514c-kube-api-access-7wg9v\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.026184 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-config-data\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.026328 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.026488 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-scripts\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.127915 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wg9v\" (UniqueName: \"kubernetes.io/projected/e41c8bff-334a-4b57-bff0-c5716b30514c-kube-api-access-7wg9v\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.128011 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-config-data\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.128106 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.128221 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-scripts\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.133926 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-config-data\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.134575 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.135570 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-scripts\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.150870 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wg9v\" (UniqueName: \"kubernetes.io/projected/e41c8bff-334a-4b57-bff0-c5716b30514c-kube-api-access-7wg9v\") pod \"nova-cell1-cell-mapping-wqppl\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.269751 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:38 crc kubenswrapper[4710]: I1128 17:21:38.766813 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqppl"] Nov 28 17:21:39 crc kubenswrapper[4710]: I1128 17:21:39.156801 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:21:39 crc kubenswrapper[4710]: I1128 17:21:39.228855 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-px6tn"] Nov 28 17:21:39 crc kubenswrapper[4710]: I1128 17:21:39.229137 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" podUID="4686c7be-8677-4c5c-801b-dc821197c301" containerName="dnsmasq-dns" containerID="cri-o://0a47fc515666d02a6bf6d00177a8219f859bcd57365186f5c18a671ee974b152" gracePeriod=10 Nov 28 17:21:39 crc kubenswrapper[4710]: I1128 17:21:39.822214 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqppl" event={"ID":"e41c8bff-334a-4b57-bff0-c5716b30514c","Type":"ContainerStarted","Data":"0a3e7c38c956376398bb8781b3c9d480ed0bf396fce83fa6933948f55401c179"} Nov 28 17:21:39 crc kubenswrapper[4710]: I1128 17:21:39.822534 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqppl" event={"ID":"e41c8bff-334a-4b57-bff0-c5716b30514c","Type":"ContainerStarted","Data":"78bccc3d30ef3e2533a234488ff3f9a568cd257bbddf28034012970054d4f0d7"} Nov 28 17:21:39 crc kubenswrapper[4710]: I1128 17:21:39.833027 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca","Type":"ContainerStarted","Data":"5326d1bf07ca45e884db20f4c7eb7e90e029634f8e8e6e7275fff673abfa2b98"} Nov 28 17:21:39 crc kubenswrapper[4710]: I1128 17:21:39.842238 4710 generic.go:334] "Generic (PLEG): container finished" podID="4686c7be-8677-4c5c-801b-dc821197c301" containerID="0a47fc515666d02a6bf6d00177a8219f859bcd57365186f5c18a671ee974b152" exitCode=0 Nov 28 17:21:39 crc kubenswrapper[4710]: I1128 17:21:39.842318 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" event={"ID":"4686c7be-8677-4c5c-801b-dc821197c301","Type":"ContainerDied","Data":"0a47fc515666d02a6bf6d00177a8219f859bcd57365186f5c18a671ee974b152"} Nov 28 17:21:39 crc kubenswrapper[4710]: I1128 17:21:39.843028 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-wqppl" podStartSLOduration=2.843016776 podStartE2EDuration="2.843016776s" podCreationTimestamp="2025-11-28 17:21:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:39.841339584 +0000 UTC m=+1389.099639629" watchObservedRunningTime="2025-11-28 17:21:39.843016776 +0000 UTC m=+1389.101316821" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.135478 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.288129 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-swift-storage-0\") pod \"4686c7be-8677-4c5c-801b-dc821197c301\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.288217 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-config\") pod \"4686c7be-8677-4c5c-801b-dc821197c301\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.288290 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-svc\") pod \"4686c7be-8677-4c5c-801b-dc821197c301\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.288359 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-nb\") pod \"4686c7be-8677-4c5c-801b-dc821197c301\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.288461 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-sb\") pod \"4686c7be-8677-4c5c-801b-dc821197c301\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.288545 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qs9h\" (UniqueName: \"kubernetes.io/projected/4686c7be-8677-4c5c-801b-dc821197c301-kube-api-access-6qs9h\") pod \"4686c7be-8677-4c5c-801b-dc821197c301\" (UID: \"4686c7be-8677-4c5c-801b-dc821197c301\") " Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.298059 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4686c7be-8677-4c5c-801b-dc821197c301-kube-api-access-6qs9h" (OuterVolumeSpecName: "kube-api-access-6qs9h") pod "4686c7be-8677-4c5c-801b-dc821197c301" (UID: "4686c7be-8677-4c5c-801b-dc821197c301"). InnerVolumeSpecName "kube-api-access-6qs9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.347106 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-config" (OuterVolumeSpecName: "config") pod "4686c7be-8677-4c5c-801b-dc821197c301" (UID: "4686c7be-8677-4c5c-801b-dc821197c301"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.358164 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4686c7be-8677-4c5c-801b-dc821197c301" (UID: "4686c7be-8677-4c5c-801b-dc821197c301"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.359634 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4686c7be-8677-4c5c-801b-dc821197c301" (UID: "4686c7be-8677-4c5c-801b-dc821197c301"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.363367 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4686c7be-8677-4c5c-801b-dc821197c301" (UID: "4686c7be-8677-4c5c-801b-dc821197c301"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.366165 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4686c7be-8677-4c5c-801b-dc821197c301" (UID: "4686c7be-8677-4c5c-801b-dc821197c301"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.392659 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.392709 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.392722 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.392736 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.392746 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4686c7be-8677-4c5c-801b-dc821197c301-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.392796 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qs9h\" (UniqueName: \"kubernetes.io/projected/4686c7be-8677-4c5c-801b-dc821197c301-kube-api-access-6qs9h\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.855598 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.855590 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-px6tn" event={"ID":"4686c7be-8677-4c5c-801b-dc821197c301","Type":"ContainerDied","Data":"bdacdd2e1e9210f8ea5c5c386ed7029ae529e34d08798365e88cf291dd3708e2"} Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.856067 4710 scope.go:117] "RemoveContainer" containerID="0a47fc515666d02a6bf6d00177a8219f859bcd57365186f5c18a671ee974b152" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.866075 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6ebdff21-cac4-4864-8bc5-47c8d8ca30ca","Type":"ContainerStarted","Data":"1f8488b0d091bf4caf05a43bbbdb10fa7116758845180e7a266b8430d38fe260"} Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.866219 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.892612 4710 scope.go:117] "RemoveContainer" containerID="aaf1c3c96a766c41722fee7af5651119eccf5b11f0063b7499a39d114e8b657c" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.920875 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.414321916 podStartE2EDuration="5.920845898s" podCreationTimestamp="2025-11-28 17:21:35 +0000 UTC" firstStartedPulling="2025-11-28 17:21:35.992051173 +0000 UTC m=+1385.250351228" lastFinishedPulling="2025-11-28 17:21:40.498575165 +0000 UTC m=+1389.756875210" observedRunningTime="2025-11-28 17:21:40.904522931 +0000 UTC m=+1390.162823006" watchObservedRunningTime="2025-11-28 17:21:40.920845898 +0000 UTC m=+1390.179145943" Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.940724 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-px6tn"] Nov 28 17:21:40 crc kubenswrapper[4710]: I1128 17:21:40.952539 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-px6tn"] Nov 28 17:21:41 crc kubenswrapper[4710]: I1128 17:21:41.154888 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4686c7be-8677-4c5c-801b-dc821197c301" path="/var/lib/kubelet/pods/4686c7be-8677-4c5c-801b-dc821197c301/volumes" Nov 28 17:21:43 crc kubenswrapper[4710]: I1128 17:21:43.344517 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:21:43 crc kubenswrapper[4710]: I1128 17:21:43.345146 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:21:43 crc kubenswrapper[4710]: I1128 17:21:43.345199 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:21:43 crc kubenswrapper[4710]: I1128 17:21:43.346107 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"21fd4e025722a9602a1e946aa30e2ca8c2a97b408a56cd641a1c9d99fc13a61e"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:21:43 crc kubenswrapper[4710]: I1128 17:21:43.346173 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://21fd4e025722a9602a1e946aa30e2ca8c2a97b408a56cd641a1c9d99fc13a61e" gracePeriod=600 Nov 28 17:21:43 crc kubenswrapper[4710]: I1128 17:21:43.902523 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="21fd4e025722a9602a1e946aa30e2ca8c2a97b408a56cd641a1c9d99fc13a61e" exitCode=0 Nov 28 17:21:43 crc kubenswrapper[4710]: I1128 17:21:43.902589 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"21fd4e025722a9602a1e946aa30e2ca8c2a97b408a56cd641a1c9d99fc13a61e"} Nov 28 17:21:43 crc kubenswrapper[4710]: I1128 17:21:43.902839 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"} Nov 28 17:21:43 crc kubenswrapper[4710]: I1128 17:21:43.902857 4710 scope.go:117] "RemoveContainer" containerID="fb26b81e49ab86b80e712b9b1ccbaa329c394a8a23985c1f1e0d00b07d836649" Nov 28 17:21:44 crc kubenswrapper[4710]: I1128 17:21:44.933553 4710 generic.go:334] "Generic (PLEG): container finished" podID="e41c8bff-334a-4b57-bff0-c5716b30514c" containerID="0a3e7c38c956376398bb8781b3c9d480ed0bf396fce83fa6933948f55401c179" exitCode=0 Nov 28 17:21:44 crc kubenswrapper[4710]: I1128 17:21:44.933631 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqppl" event={"ID":"e41c8bff-334a-4b57-bff0-c5716b30514c","Type":"ContainerDied","Data":"0a3e7c38c956376398bb8781b3c9d480ed0bf396fce83fa6933948f55401c179"} Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.206606 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.207102 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.393159 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.534226 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-combined-ca-bundle\") pod \"e41c8bff-334a-4b57-bff0-c5716b30514c\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.534454 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-config-data\") pod \"e41c8bff-334a-4b57-bff0-c5716b30514c\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.534488 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wg9v\" (UniqueName: \"kubernetes.io/projected/e41c8bff-334a-4b57-bff0-c5716b30514c-kube-api-access-7wg9v\") pod \"e41c8bff-334a-4b57-bff0-c5716b30514c\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.534503 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-scripts\") pod \"e41c8bff-334a-4b57-bff0-c5716b30514c\" (UID: \"e41c8bff-334a-4b57-bff0-c5716b30514c\") " Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.540748 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41c8bff-334a-4b57-bff0-c5716b30514c-kube-api-access-7wg9v" (OuterVolumeSpecName: "kube-api-access-7wg9v") pod "e41c8bff-334a-4b57-bff0-c5716b30514c" (UID: "e41c8bff-334a-4b57-bff0-c5716b30514c"). InnerVolumeSpecName "kube-api-access-7wg9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.541483 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-scripts" (OuterVolumeSpecName: "scripts") pod "e41c8bff-334a-4b57-bff0-c5716b30514c" (UID: "e41c8bff-334a-4b57-bff0-c5716b30514c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.569152 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e41c8bff-334a-4b57-bff0-c5716b30514c" (UID: "e41c8bff-334a-4b57-bff0-c5716b30514c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.573749 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-config-data" (OuterVolumeSpecName: "config-data") pod "e41c8bff-334a-4b57-bff0-c5716b30514c" (UID: "e41c8bff-334a-4b57-bff0-c5716b30514c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.636971 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.637013 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.637025 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wg9v\" (UniqueName: \"kubernetes.io/projected/e41c8bff-334a-4b57-bff0-c5716b30514c-kube-api-access-7wg9v\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.637040 4710 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e41c8bff-334a-4b57-bff0-c5716b30514c-scripts\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.955317 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-wqppl" event={"ID":"e41c8bff-334a-4b57-bff0-c5716b30514c","Type":"ContainerDied","Data":"78bccc3d30ef3e2533a234488ff3f9a568cd257bbddf28034012970054d4f0d7"} Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.955353 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78bccc3d30ef3e2533a234488ff3f9a568cd257bbddf28034012970054d4f0d7" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:46.955374 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-wqppl" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.188638 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.188893 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b7b77a7d-87ae-49de-bd1e-cabc067b1966" containerName="nova-scheduler-scheduler" containerID="cri-o://f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb" gracePeriod=30 Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.221984 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09bc7732-2c21-488b-b80a-4731542028bf" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.222052 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="09bc7732-2c21-488b-b80a-4731542028bf" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.226500 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.226737 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09bc7732-2c21-488b-b80a-4731542028bf" containerName="nova-api-log" containerID="cri-o://41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07" 
gracePeriod=30 Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.227067 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09bc7732-2c21-488b-b80a-4731542028bf" containerName="nova-api-api" containerID="cri-o://8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80" gracePeriod=30 Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.246325 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.246588 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-log" containerID="cri-o://c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7" gracePeriod=30 Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.246912 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-metadata" containerID="cri-o://449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486" gracePeriod=30 Nov 28 17:21:47 crc kubenswrapper[4710]: E1128 17:21:47.644262 4710 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 17:21:47 crc kubenswrapper[4710]: E1128 17:21:47.645815 4710 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 17:21:47 crc kubenswrapper[4710]: E1128 17:21:47.648236 4710 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 28 17:21:47 crc kubenswrapper[4710]: E1128 17:21:47.648313 4710 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b7b77a7d-87ae-49de-bd1e-cabc067b1966" containerName="nova-scheduler-scheduler" Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.966889 4710 generic.go:334] "Generic (PLEG): container finished" podID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerID="c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7" exitCode=143 Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.966958 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfb833a1-27e7-478e-a7a6-e92d529a6f8b","Type":"ContainerDied","Data":"c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7"} Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.969144 4710 generic.go:334] "Generic (PLEG): container finished" podID="09bc7732-2c21-488b-b80a-4731542028bf" 
containerID="41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07" exitCode=143 Nov 28 17:21:47 crc kubenswrapper[4710]: I1128 17:21:47.969205 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09bc7732-2c21-488b-b80a-4731542028bf","Type":"ContainerDied","Data":"41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07"} Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.397697 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:42294->10.217.0.217:8775: read: connection reset by peer" Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.397728 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.217:8775/\": read tcp 10.217.0.2:42292->10.217.0.217:8775: read: connection reset by peer" Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.838042 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.930652 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-nova-metadata-tls-certs\") pod \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.930719 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-894gh\" (UniqueName: \"kubernetes.io/projected/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-kube-api-access-894gh\") pod \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.930872 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-logs\") pod \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.930903 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-combined-ca-bundle\") pod \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.931337 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-config-data\") pod \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\" (UID: \"cfb833a1-27e7-478e-a7a6-e92d529a6f8b\") " Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.931288 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-logs" (OuterVolumeSpecName: "logs") pod "cfb833a1-27e7-478e-a7a6-e92d529a6f8b" (UID: "cfb833a1-27e7-478e-a7a6-e92d529a6f8b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.932014 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.945612 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-kube-api-access-894gh" (OuterVolumeSpecName: "kube-api-access-894gh") pod "cfb833a1-27e7-478e-a7a6-e92d529a6f8b" (UID: "cfb833a1-27e7-478e-a7a6-e92d529a6f8b"). InnerVolumeSpecName "kube-api-access-894gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.966538 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-config-data" (OuterVolumeSpecName: "config-data") pod "cfb833a1-27e7-478e-a7a6-e92d529a6f8b" (UID: "cfb833a1-27e7-478e-a7a6-e92d529a6f8b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:50 crc kubenswrapper[4710]: I1128 17:21:50.966742 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfb833a1-27e7-478e-a7a6-e92d529a6f8b" (UID: "cfb833a1-27e7-478e-a7a6-e92d529a6f8b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.000165 4710 generic.go:334] "Generic (PLEG): container finished" podID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerID="449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486" exitCode=0 Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.000230 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfb833a1-27e7-478e-a7a6-e92d529a6f8b","Type":"ContainerDied","Data":"449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486"} Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.000265 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"cfb833a1-27e7-478e-a7a6-e92d529a6f8b","Type":"ContainerDied","Data":"e1b7458c1f2437828884e418706ce4252dc2c83ebe42d79af520cddfd8b63f0b"} Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.000305 4710 scope.go:117] "RemoveContainer" containerID="449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.000266 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.027123 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "cfb833a1-27e7-478e-a7a6-e92d529a6f8b" (UID: "cfb833a1-27e7-478e-a7a6-e92d529a6f8b"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.033646 4710 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.033686 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-894gh\" (UniqueName: \"kubernetes.io/projected/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-kube-api-access-894gh\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.033699 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.033712 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb833a1-27e7-478e-a7a6-e92d529a6f8b-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.089099 4710 scope.go:117] "RemoveContainer" containerID="c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.115211 4710 scope.go:117] "RemoveContainer" containerID="449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486" Nov 28 17:21:51 crc kubenswrapper[4710]: E1128 17:21:51.115681 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486\": container with ID starting with 449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486 not found: ID does not exist" containerID="449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.115711 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486"} err="failed to get container status \"449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486\": rpc error: code = NotFound desc = could not find container \"449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486\": container with ID starting with 449af9ac0d3fc2a9cd43171754786fab71af9b404dce120a06bc39d294860486 not found: ID does not exist" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.115814 4710 scope.go:117] "RemoveContainer" containerID="c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7" Nov 28 17:21:51 crc kubenswrapper[4710]: E1128 17:21:51.116032 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7\": container with ID starting with c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7 not found: ID does not exist" containerID="c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.116054 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7"} err="failed to get container status \"c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7\": rpc 
error: code = NotFound desc = could not find container \"c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7\": container with ID starting with c34fddd0965653232d336aa9b3959f7ae7cc1b221c9ad446adbe02f608736ba7 not found: ID does not exist" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.327531 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.337724 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.350872 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:51 crc kubenswrapper[4710]: E1128 17:21:51.351342 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-log" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.351367 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-log" Nov 28 17:21:51 crc kubenswrapper[4710]: E1128 17:21:51.351395 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e41c8bff-334a-4b57-bff0-c5716b30514c" containerName="nova-manage" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.351401 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41c8bff-334a-4b57-bff0-c5716b30514c" containerName="nova-manage" Nov 28 17:21:51 crc kubenswrapper[4710]: E1128 17:21:51.351440 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4686c7be-8677-4c5c-801b-dc821197c301" containerName="dnsmasq-dns" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.351447 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="4686c7be-8677-4c5c-801b-dc821197c301" containerName="dnsmasq-dns" Nov 28 17:21:51 crc kubenswrapper[4710]: E1128 17:21:51.351456 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-metadata" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.351461 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-metadata" Nov 28 17:21:51 crc kubenswrapper[4710]: E1128 17:21:51.351472 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4686c7be-8677-4c5c-801b-dc821197c301" containerName="init" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.351479 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="4686c7be-8677-4c5c-801b-dc821197c301" containerName="init" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.351666 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="4686c7be-8677-4c5c-801b-dc821197c301" containerName="dnsmasq-dns" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.351690 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-log" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.351704 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41c8bff-334a-4b57-bff0-c5716b30514c" containerName="nova-manage" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.351730 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" containerName="nova-metadata-metadata" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.353684 4710 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.359799 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.360182 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.372697 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.455166 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0fff04-08c6-4268-8534-fa5b2e28e58f-logs\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.455515 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0fff04-08c6-4268-8534-fa5b2e28e58f-config-data\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.455561 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0fff04-08c6-4268-8534-fa5b2e28e58f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.455584 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0fff04-08c6-4268-8534-fa5b2e28e58f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.455659 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn5lf\" (UniqueName: \"kubernetes.io/projected/6f0fff04-08c6-4268-8534-fa5b2e28e58f-kube-api-access-mn5lf\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.558237 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0fff04-08c6-4268-8534-fa5b2e28e58f-logs\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.558313 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0fff04-08c6-4268-8534-fa5b2e28e58f-config-data\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.558378 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0fff04-08c6-4268-8534-fa5b2e28e58f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " 
pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.558414 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0fff04-08c6-4268-8534-fa5b2e28e58f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.558712 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6f0fff04-08c6-4268-8534-fa5b2e28e58f-logs\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.559864 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn5lf\" (UniqueName: \"kubernetes.io/projected/6f0fff04-08c6-4268-8534-fa5b2e28e58f-kube-api-access-mn5lf\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.563736 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f0fff04-08c6-4268-8534-fa5b2e28e58f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.563874 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0fff04-08c6-4268-8534-fa5b2e28e58f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.564317 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0fff04-08c6-4268-8534-fa5b2e28e58f-config-data\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.576273 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn5lf\" (UniqueName: \"kubernetes.io/projected/6f0fff04-08c6-4268-8534-fa5b2e28e58f-kube-api-access-mn5lf\") pod \"nova-metadata-0\" (UID: \"6f0fff04-08c6-4268-8534-fa5b2e28e58f\") " pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.717655 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.859520 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.970470 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfd29\" (UniqueName: \"kubernetes.io/projected/b7b77a7d-87ae-49de-bd1e-cabc067b1966-kube-api-access-hfd29\") pod \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.970675 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-config-data\") pod \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.970779 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-combined-ca-bundle\") pod \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\" (UID: \"b7b77a7d-87ae-49de-bd1e-cabc067b1966\") " Nov 28 17:21:51 crc kubenswrapper[4710]: I1128 17:21:51.976614 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b77a7d-87ae-49de-bd1e-cabc067b1966-kube-api-access-hfd29" (OuterVolumeSpecName: "kube-api-access-hfd29") pod "b7b77a7d-87ae-49de-bd1e-cabc067b1966" (UID: "b7b77a7d-87ae-49de-bd1e-cabc067b1966"). InnerVolumeSpecName "kube-api-access-hfd29". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.024465 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-config-data" (OuterVolumeSpecName: "config-data") pod "b7b77a7d-87ae-49de-bd1e-cabc067b1966" (UID: "b7b77a7d-87ae-49de-bd1e-cabc067b1966"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.026334 4710 generic.go:334] "Generic (PLEG): container finished" podID="b7b77a7d-87ae-49de-bd1e-cabc067b1966" containerID="f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb" exitCode=0 Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.026524 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7b77a7d-87ae-49de-bd1e-cabc067b1966","Type":"ContainerDied","Data":"f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb"} Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.026700 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b7b77a7d-87ae-49de-bd1e-cabc067b1966","Type":"ContainerDied","Data":"630b01d71499d98a93031ac2787f877ccf477a06ce97fd405d767f522e7b9921"} Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.026732 4710 scope.go:117] "RemoveContainer" containerID="f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.026598 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.038502 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b7b77a7d-87ae-49de-bd1e-cabc067b1966" (UID: "b7b77a7d-87ae-49de-bd1e-cabc067b1966"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.049538 4710 scope.go:117] "RemoveContainer" containerID="f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb" Nov 28 17:21:52 crc kubenswrapper[4710]: E1128 17:21:52.050105 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb\": container with ID starting with f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb not found: ID does not exist" containerID="f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.050176 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb"} err="failed to get container status \"f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb\": rpc error: code = NotFound desc = could not find container \"f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb\": container with ID starting with f28bf6ff11266a9b1d568a236c282edef2c38249d739cda2e8a686ea316c7ccb not found: ID does not exist" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.074523 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.074568 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7b77a7d-87ae-49de-bd1e-cabc067b1966-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.074579 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfd29\" (UniqueName: \"kubernetes.io/projected/b7b77a7d-87ae-49de-bd1e-cabc067b1966-kube-api-access-hfd29\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.200020 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 28 17:21:52 crc kubenswrapper[4710]: W1128 17:21:52.206849 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f0fff04_08c6_4268_8534_fa5b2e28e58f.slice/crio-c4195af569bb17e12cf22de8c5de2cc1fb37936d9495f02785b758eb2ce84b85 WatchSource:0}: Error finding container c4195af569bb17e12cf22de8c5de2cc1fb37936d9495f02785b758eb2ce84b85: Status 404 returned error can't find the container with id c4195af569bb17e12cf22de8c5de2cc1fb37936d9495f02785b758eb2ce84b85 Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.361945 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.369612 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] 
Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.383982 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:52 crc kubenswrapper[4710]: E1128 17:21:52.384377 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7b77a7d-87ae-49de-bd1e-cabc067b1966" containerName="nova-scheduler-scheduler" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.384394 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7b77a7d-87ae-49de-bd1e-cabc067b1966" containerName="nova-scheduler-scheduler" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.385023 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7b77a7d-87ae-49de-bd1e-cabc067b1966" containerName="nova-scheduler-scheduler" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.385725 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.388941 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.403659 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.483614 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362c81cb-3e82-49e0-be70-7206bcd8ebe8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"362c81cb-3e82-49e0-be70-7206bcd8ebe8\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.483681 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362c81cb-3e82-49e0-be70-7206bcd8ebe8-config-data\") pod \"nova-scheduler-0\" (UID: \"362c81cb-3e82-49e0-be70-7206bcd8ebe8\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.483711 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btxwj\" (UniqueName: \"kubernetes.io/projected/362c81cb-3e82-49e0-be70-7206bcd8ebe8-kube-api-access-btxwj\") pod \"nova-scheduler-0\" (UID: \"362c81cb-3e82-49e0-be70-7206bcd8ebe8\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.585497 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362c81cb-3e82-49e0-be70-7206bcd8ebe8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"362c81cb-3e82-49e0-be70-7206bcd8ebe8\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.585896 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362c81cb-3e82-49e0-be70-7206bcd8ebe8-config-data\") pod \"nova-scheduler-0\" (UID: \"362c81cb-3e82-49e0-be70-7206bcd8ebe8\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.585942 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btxwj\" (UniqueName: \"kubernetes.io/projected/362c81cb-3e82-49e0-be70-7206bcd8ebe8-kube-api-access-btxwj\") pod \"nova-scheduler-0\" (UID: \"362c81cb-3e82-49e0-be70-7206bcd8ebe8\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:52 
crc kubenswrapper[4710]: I1128 17:21:52.591455 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/362c81cb-3e82-49e0-be70-7206bcd8ebe8-config-data\") pod \"nova-scheduler-0\" (UID: \"362c81cb-3e82-49e0-be70-7206bcd8ebe8\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.591813 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/362c81cb-3e82-49e0-be70-7206bcd8ebe8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"362c81cb-3e82-49e0-be70-7206bcd8ebe8\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.609443 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btxwj\" (UniqueName: \"kubernetes.io/projected/362c81cb-3e82-49e0-be70-7206bcd8ebe8-kube-api-access-btxwj\") pod \"nova-scheduler-0\" (UID: \"362c81cb-3e82-49e0-be70-7206bcd8ebe8\") " pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.715816 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 28 17:21:52 crc kubenswrapper[4710]: I1128 17:21:52.984415 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.041806 4710 generic.go:334] "Generic (PLEG): container finished" podID="09bc7732-2c21-488b-b80a-4731542028bf" containerID="8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80" exitCode=0 Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.041866 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09bc7732-2c21-488b-b80a-4731542028bf","Type":"ContainerDied","Data":"8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80"} Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.041892 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09bc7732-2c21-488b-b80a-4731542028bf","Type":"ContainerDied","Data":"a7757b0d9f3be8e107c81a4b0fa74dd555b81fc79cabd5aa6337f9e4fc4b6111"} Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.041910 4710 scope.go:117] "RemoveContainer" containerID="8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.042018 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.045445 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f0fff04-08c6-4268-8534-fa5b2e28e58f","Type":"ContainerStarted","Data":"a63ea68508b05e36b207194a1bcd6618dc3592f65a57b118d381fcfd0195ffbb"} Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.045477 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f0fff04-08c6-4268-8534-fa5b2e28e58f","Type":"ContainerStarted","Data":"7fccc564474baae9eb844bee70ee84c5094c14e4cc87fad7bec1903ec7c4b508"} Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.045487 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6f0fff04-08c6-4268-8534-fa5b2e28e58f","Type":"ContainerStarted","Data":"c4195af569bb17e12cf22de8c5de2cc1fb37936d9495f02785b758eb2ce84b85"} Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.085780 4710 scope.go:117] "RemoveContainer" containerID="41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.109576 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-public-tls-certs\") pod \"09bc7732-2c21-488b-b80a-4731542028bf\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.109912 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xqm9\" (UniqueName: \"kubernetes.io/projected/09bc7732-2c21-488b-b80a-4731542028bf-kube-api-access-6xqm9\") pod \"09bc7732-2c21-488b-b80a-4731542028bf\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.110111 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-internal-tls-certs\") pod \"09bc7732-2c21-488b-b80a-4731542028bf\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.110181 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-combined-ca-bundle\") pod \"09bc7732-2c21-488b-b80a-4731542028bf\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.110318 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-config-data\") pod \"09bc7732-2c21-488b-b80a-4731542028bf\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.110456 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09bc7732-2c21-488b-b80a-4731542028bf-logs\") pod \"09bc7732-2c21-488b-b80a-4731542028bf\" (UID: \"09bc7732-2c21-488b-b80a-4731542028bf\") " Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.111841 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09bc7732-2c21-488b-b80a-4731542028bf-logs" (OuterVolumeSpecName: "logs") pod "09bc7732-2c21-488b-b80a-4731542028bf" (UID: 
"09bc7732-2c21-488b-b80a-4731542028bf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.117096 4710 scope.go:117] "RemoveContainer" containerID="8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80" Nov 28 17:21:53 crc kubenswrapper[4710]: E1128 17:21:53.117810 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80\": container with ID starting with 8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80 not found: ID does not exist" containerID="8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.117851 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80"} err="failed to get container status \"8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80\": rpc error: code = NotFound desc = could not find container \"8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80\": container with ID starting with 8c2809eb9228d961b5b7d96766ac1f2985b067b1b621c66efe1515cb98de8c80 not found: ID does not exist" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.117875 4710 scope.go:117] "RemoveContainer" containerID="41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07" Nov 28 17:21:53 crc kubenswrapper[4710]: E1128 17:21:53.118282 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07\": container with ID starting with 41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07 not found: ID does not exist" containerID="41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.118352 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07"} err="failed to get container status \"41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07\": rpc error: code = NotFound desc = could not find container \"41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07\": container with ID starting with 41e707afdcbe231e8581541ee4328e80bd544b887e41200584af7f090ce1df07 not found: ID does not exist" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.120122 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.120110114 podStartE2EDuration="2.120110114s" podCreationTimestamp="2025-11-28 17:21:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:53.087150739 +0000 UTC m=+1402.345450784" watchObservedRunningTime="2025-11-28 17:21:53.120110114 +0000 UTC m=+1402.378410159" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.128696 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09bc7732-2c21-488b-b80a-4731542028bf-kube-api-access-6xqm9" (OuterVolumeSpecName: "kube-api-access-6xqm9") pod "09bc7732-2c21-488b-b80a-4731542028bf" (UID: "09bc7732-2c21-488b-b80a-4731542028bf"). 
InnerVolumeSpecName "kube-api-access-6xqm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.158917 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7b77a7d-87ae-49de-bd1e-cabc067b1966" path="/var/lib/kubelet/pods/b7b77a7d-87ae-49de-bd1e-cabc067b1966/volumes" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.161730 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfb833a1-27e7-478e-a7a6-e92d529a6f8b" path="/var/lib/kubelet/pods/cfb833a1-27e7-478e-a7a6-e92d529a6f8b/volumes" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.173332 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09bc7732-2c21-488b-b80a-4731542028bf" (UID: "09bc7732-2c21-488b-b80a-4731542028bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.177197 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "09bc7732-2c21-488b-b80a-4731542028bf" (UID: "09bc7732-2c21-488b-b80a-4731542028bf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.180925 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-config-data" (OuterVolumeSpecName: "config-data") pod "09bc7732-2c21-488b-b80a-4731542028bf" (UID: "09bc7732-2c21-488b-b80a-4731542028bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.182349 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "09bc7732-2c21-488b-b80a-4731542028bf" (UID: "09bc7732-2c21-488b-b80a-4731542028bf"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.209613 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.215617 4710 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.215653 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xqm9\" (UniqueName: \"kubernetes.io/projected/09bc7732-2c21-488b-b80a-4731542028bf-kube-api-access-6xqm9\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.215666 4710 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.215677 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.215687 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09bc7732-2c21-488b-b80a-4731542028bf-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.215697 4710 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09bc7732-2c21-488b-b80a-4731542028bf-logs\") on node \"crc\" DevicePath \"\"" Nov 28 17:21:53 crc kubenswrapper[4710]: W1128 17:21:53.216681 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod362c81cb_3e82_49e0_be70_7206bcd8ebe8.slice/crio-cb28943fa91ecab04f1a91dbd17d5ce28f84776c92400639cbc545edcc19fd2b WatchSource:0}: Error finding container cb28943fa91ecab04f1a91dbd17d5ce28f84776c92400639cbc545edcc19fd2b: Status 404 returned error can't find the container with id cb28943fa91ecab04f1a91dbd17d5ce28f84776c92400639cbc545edcc19fd2b Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.396936 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.424286 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.450748 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:53 crc kubenswrapper[4710]: E1128 17:21:53.451815 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bc7732-2c21-488b-b80a-4731542028bf" containerName="nova-api-api" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.451838 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bc7732-2c21-488b-b80a-4731542028bf" containerName="nova-api-api" Nov 28 17:21:53 crc kubenswrapper[4710]: E1128 17:21:53.451886 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09bc7732-2c21-488b-b80a-4731542028bf" containerName="nova-api-log" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.451895 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="09bc7732-2c21-488b-b80a-4731542028bf" 
containerName="nova-api-log" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.452149 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="09bc7732-2c21-488b-b80a-4731542028bf" containerName="nova-api-log" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.452195 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="09bc7732-2c21-488b-b80a-4731542028bf" containerName="nova-api-api" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.453612 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.462709 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.463022 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.463175 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.505261 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.527496 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9drw\" (UniqueName: \"kubernetes.io/projected/b21809f6-0359-4e17-b098-3002764c13c4-kube-api-access-m9drw\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.527552 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b21809f6-0359-4e17-b098-3002764c13c4-logs\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.527606 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.527627 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-public-tls-certs\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.527660 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-config-data\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.527694 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.628868 4710 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9drw\" (UniqueName: \"kubernetes.io/projected/b21809f6-0359-4e17-b098-3002764c13c4-kube-api-access-m9drw\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.628973 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b21809f6-0359-4e17-b098-3002764c13c4-logs\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.629032 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.629382 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b21809f6-0359-4e17-b098-3002764c13c4-logs\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.629452 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-public-tls-certs\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.630100 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-config-data\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.630161 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.635151 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.635313 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-config-data\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.635601 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.636361 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b21809f6-0359-4e17-b098-3002764c13c4-public-tls-certs\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.656814 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9drw\" (UniqueName: \"kubernetes.io/projected/b21809f6-0359-4e17-b098-3002764c13c4-kube-api-access-m9drw\") pod \"nova-api-0\" (UID: \"b21809f6-0359-4e17-b098-3002764c13c4\") " pod="openstack/nova-api-0" Nov 28 17:21:53 crc kubenswrapper[4710]: I1128 17:21:53.868485 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 28 17:21:54 crc kubenswrapper[4710]: I1128 17:21:54.059913 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"362c81cb-3e82-49e0-be70-7206bcd8ebe8","Type":"ContainerStarted","Data":"06fedd44475d3b113118ceeb76b15395d173e1ba0af006df9fcad641f7d9d794"} Nov 28 17:21:54 crc kubenswrapper[4710]: I1128 17:21:54.060154 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"362c81cb-3e82-49e0-be70-7206bcd8ebe8","Type":"ContainerStarted","Data":"cb28943fa91ecab04f1a91dbd17d5ce28f84776c92400639cbc545edcc19fd2b"} Nov 28 17:21:54 crc kubenswrapper[4710]: I1128 17:21:54.074774 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.074734285 podStartE2EDuration="2.074734285s" podCreationTimestamp="2025-11-28 17:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:54.074145346 +0000 UTC m=+1403.332445401" watchObservedRunningTime="2025-11-28 17:21:54.074734285 +0000 UTC m=+1403.333034330" Nov 28 17:21:54 crc kubenswrapper[4710]: I1128 17:21:54.347252 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 28 17:21:55 crc kubenswrapper[4710]: I1128 17:21:55.071332 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b21809f6-0359-4e17-b098-3002764c13c4","Type":"ContainerStarted","Data":"8d7e3f4df0b5e6673d680c18d5d708141063ae926b79d4ce96b9f2056102f9fc"} Nov 28 17:21:55 crc kubenswrapper[4710]: I1128 17:21:55.071635 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b21809f6-0359-4e17-b098-3002764c13c4","Type":"ContainerStarted","Data":"89f51c8c9302ff24b61941954b0f426956c7788c06422c57e871620735eeacef"} Nov 28 17:21:55 crc kubenswrapper[4710]: I1128 17:21:55.071651 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b21809f6-0359-4e17-b098-3002764c13c4","Type":"ContainerStarted","Data":"c37aaf0d41af1d1b3b6cba799732cc5ee0516900caabb9c82a6f39f4e180ff9a"} Nov 28 17:21:55 crc kubenswrapper[4710]: I1128 17:21:55.101797 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.101778388 podStartE2EDuration="2.101778388s" podCreationTimestamp="2025-11-28 17:21:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:21:55.088124003 +0000 UTC m=+1404.346424058" watchObservedRunningTime="2025-11-28 17:21:55.101778388 +0000 UTC m=+1404.360078433" Nov 28 17:21:55 crc kubenswrapper[4710]: I1128 17:21:55.154288 
4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09bc7732-2c21-488b-b80a-4731542028bf" path="/var/lib/kubelet/pods/09bc7732-2c21-488b-b80a-4731542028bf/volumes" Nov 28 17:21:56 crc kubenswrapper[4710]: I1128 17:21:56.717946 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:21:56 crc kubenswrapper[4710]: I1128 17:21:56.718375 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 28 17:21:57 crc kubenswrapper[4710]: I1128 17:21:57.716029 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 28 17:22:01 crc kubenswrapper[4710]: I1128 17:22:01.719012 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 17:22:01 crc kubenswrapper[4710]: I1128 17:22:01.719565 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 28 17:22:02 crc kubenswrapper[4710]: I1128 17:22:02.716526 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 28 17:22:02 crc kubenswrapper[4710]: I1128 17:22:02.728956 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6f0fff04-08c6-4268-8534-fa5b2e28e58f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:22:02 crc kubenswrapper[4710]: I1128 17:22:02.729078 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6f0fff04-08c6-4268-8534-fa5b2e28e58f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.227:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:22:02 crc kubenswrapper[4710]: I1128 17:22:02.747075 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 28 17:22:03 crc kubenswrapper[4710]: I1128 17:22:03.186417 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 28 17:22:03 crc kubenswrapper[4710]: I1128 17:22:03.869044 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:22:03 crc kubenswrapper[4710]: I1128 17:22:03.869326 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 28 17:22:04 crc kubenswrapper[4710]: I1128 17:22:04.880119 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b21809f6-0359-4e17-b098-3002764c13c4" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:22:04 crc kubenswrapper[4710]: I1128 17:22:04.880131 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b21809f6-0359-4e17-b098-3002764c13c4" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.229:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 28 17:22:05 crc kubenswrapper[4710]: I1128 17:22:05.525152 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 28 17:22:11 crc kubenswrapper[4710]: 
I1128 17:22:11.723587 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 17:22:11 crc kubenswrapper[4710]: I1128 17:22:11.724428 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 28 17:22:11 crc kubenswrapper[4710]: I1128 17:22:11.736392 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 17:22:11 crc kubenswrapper[4710]: I1128 17:22:11.738625 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 28 17:22:13 crc kubenswrapper[4710]: I1128 17:22:13.875332 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 17:22:13 crc kubenswrapper[4710]: I1128 17:22:13.876799 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 17:22:13 crc kubenswrapper[4710]: I1128 17:22:13.882911 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 17:22:13 crc kubenswrapper[4710]: I1128 17:22:13.883985 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 28 17:22:14 crc kubenswrapper[4710]: I1128 17:22:14.268905 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 28 17:22:14 crc kubenswrapper[4710]: I1128 17:22:14.276014 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 28 17:22:22 crc kubenswrapper[4710]: I1128 17:22:22.758068 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:22:23 crc kubenswrapper[4710]: I1128 17:22:23.691434 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:22:27 crc kubenswrapper[4710]: I1128 17:22:27.603257 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="01f3773a-064e-4241-8327-758541098113" containerName="rabbitmq" containerID="cri-o://f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0" gracePeriod=604796 Nov 28 17:22:27 crc kubenswrapper[4710]: I1128 17:22:27.992200 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="f399c745-4f4e-44e8-8813-af3861dc0eb0" containerName="rabbitmq" containerID="cri-o://30e9673a2bbd342f419e56170fc3b2ad0e2baead63a1f7877b1373479afe4653" gracePeriod=604796 Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.282288 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.410547 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlgt5\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-kube-api-access-mlgt5\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.410624 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-confd\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.410682 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-server-conf\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.410916 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-erlang-cookie\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.410957 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01f3773a-064e-4241-8327-758541098113-erlang-cookie-secret\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.411084 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-config-data\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.411154 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-tls\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.411209 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-plugins-conf\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.411238 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01f3773a-064e-4241-8327-758541098113-pod-info\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.411304 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-plugins\") pod 
\"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.411336 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"01f3773a-064e-4241-8327-758541098113\" (UID: \"01f3773a-064e-4241-8327-758541098113\") " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.417064 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.417850 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.418632 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.429266 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.429390 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-kube-api-access-mlgt5" (OuterVolumeSpecName: "kube-api-access-mlgt5") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "kube-api-access-mlgt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.429453 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/01f3773a-064e-4241-8327-758541098113-pod-info" (OuterVolumeSpecName: "pod-info") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.429714 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.435077 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01f3773a-064e-4241-8327-758541098113-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.521372 4710 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.521775 4710 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.521790 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlgt5\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-kube-api-access-mlgt5\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.521806 4710 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01f3773a-064e-4241-8327-758541098113-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.521817 4710 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01f3773a-064e-4241-8327-758541098113-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.521830 4710 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.521840 4710 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.521850 4710 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01f3773a-064e-4241-8327-758541098113-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.569024 4710 generic.go:334] "Generic (PLEG): container finished" podID="01f3773a-064e-4241-8327-758541098113" containerID="f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0" exitCode=0 Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.569088 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01f3773a-064e-4241-8327-758541098113","Type":"ContainerDied","Data":"f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0"} Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.569115 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01f3773a-064e-4241-8327-758541098113","Type":"ContainerDied","Data":"3c9cc93b0c733783dfec3570d5ce9eeb5563117b18cf2ef28612fccce71ff93a"} Nov 28 17:22:34 crc 
kubenswrapper[4710]: I1128 17:22:34.569133 4710 scope.go:117] "RemoveContainer" containerID="f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.569277 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.573409 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-config-data" (OuterVolumeSpecName: "config-data") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.577189 4710 generic.go:334] "Generic (PLEG): container finished" podID="f399c745-4f4e-44e8-8813-af3861dc0eb0" containerID="30e9673a2bbd342f419e56170fc3b2ad0e2baead63a1f7877b1373479afe4653" exitCode=0 Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.577210 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f399c745-4f4e-44e8-8813-af3861dc0eb0","Type":"ContainerDied","Data":"30e9673a2bbd342f419e56170fc3b2ad0e2baead63a1f7877b1373479afe4653"} Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.626217 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.652066 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-server-conf" (OuterVolumeSpecName: "server-conf") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.744143 4710 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01f3773a-064e-4241-8327-758541098113-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.754846 4710 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.795113 4710 scope.go:117] "RemoveContainer" containerID="2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.821807 4710 scope.go:117] "RemoveContainer" containerID="f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0" Nov 28 17:22:34 crc kubenswrapper[4710]: E1128 17:22:34.823435 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0\": container with ID starting with f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0 not found: ID does not exist" containerID="f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.823485 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0"} err="failed to get container status \"f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0\": rpc error: code = NotFound desc = could not find container \"f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0\": container with ID starting with f5978dd00c20567c60f57a5232b929a707dd149a202bb3cffc4398646a071fd0 not found: ID does not exist" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.823514 4710 scope.go:117] "RemoveContainer" containerID="2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f" Nov 28 17:22:34 crc kubenswrapper[4710]: E1128 17:22:34.830296 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f\": container with ID starting with 2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f not found: ID does not exist" containerID="2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.830466 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f"} err="failed to get container status \"2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f\": rpc error: code = NotFound desc = could not find container \"2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f\": container with ID starting with 2137df5d62ef4b0f4a44421f12c7fdd55c62b587ce4176a1c2d112cd04431c7f not found: ID does not exist" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.842960 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "01f3773a-064e-4241-8327-758541098113" (UID: "01f3773a-064e-4241-8327-758541098113"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.845068 4710 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.845107 4710 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01f3773a-064e-4241-8327-758541098113-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.976558 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:34 crc kubenswrapper[4710]: I1128 17:22:34.996123 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.006267 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.037291 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:22:35 crc kubenswrapper[4710]: E1128 17:22:35.037926 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f3773a-064e-4241-8327-758541098113" containerName="rabbitmq" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.037949 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f3773a-064e-4241-8327-758541098113" containerName="rabbitmq" Nov 28 17:22:35 crc kubenswrapper[4710]: E1128 17:22:35.037965 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f399c745-4f4e-44e8-8813-af3861dc0eb0" containerName="rabbitmq" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.037974 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f399c745-4f4e-44e8-8813-af3861dc0eb0" containerName="rabbitmq" Nov 28 17:22:35 crc kubenswrapper[4710]: E1128 17:22:35.038012 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f399c745-4f4e-44e8-8813-af3861dc0eb0" containerName="setup-container" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.038021 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f399c745-4f4e-44e8-8813-af3861dc0eb0" containerName="setup-container" Nov 28 17:22:35 crc kubenswrapper[4710]: E1128 17:22:35.038042 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01f3773a-064e-4241-8327-758541098113" containerName="setup-container" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.038051 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="01f3773a-064e-4241-8327-758541098113" containerName="setup-container" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.038352 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="01f3773a-064e-4241-8327-758541098113" containerName="rabbitmq" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.038391 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f399c745-4f4e-44e8-8813-af3861dc0eb0" containerName="rabbitmq" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.039978 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.044470 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.044518 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.044625 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.045082 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.045132 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-pk8nc" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.045178 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.055603 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.068565 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.149765 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-config-data\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.150035 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-tls\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.150175 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f399c745-4f4e-44e8-8813-af3861dc0eb0-erlang-cookie-secret\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.150324 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-erlang-cookie\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.150423 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-458jd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-kube-api-access-458jd\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.150513 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f399c745-4f4e-44e8-8813-af3861dc0eb0-pod-info\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc 
kubenswrapper[4710]: I1128 17:22:35.150590 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-server-conf\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.150673 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-plugins\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.150814 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.150827 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-plugins-conf\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.150999 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-confd\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.151334 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f399c745-4f4e-44e8-8813-af3861dc0eb0\" (UID: \"f399c745-4f4e-44e8-8813-af3861dc0eb0\") " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.151815 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e35eae-e3e4-43df-83fb-4a2233406e73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.151952 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e35eae-e3e4-43df-83fb-4a2233406e73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.151393 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.151860 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.152368 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.152428 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e35eae-e3e4-43df-83fb-4a2233406e73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.152529 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4r7g\" (UniqueName: \"kubernetes.io/projected/a9e35eae-e3e4-43df-83fb-4a2233406e73-kube-api-access-g4r7g\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.152598 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e35eae-e3e4-43df-83fb-4a2233406e73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.152743 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.152796 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.152831 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.152875 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " 
pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.152947 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e35eae-e3e4-43df-83fb-4a2233406e73-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.153025 4710 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.153035 4710 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.153045 4710 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.155652 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-kube-api-access-458jd" (OuterVolumeSpecName: "kube-api-access-458jd") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "kube-api-access-458jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.156483 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f399c745-4f4e-44e8-8813-af3861dc0eb0-pod-info" (OuterVolumeSpecName: "pod-info") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.156623 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01f3773a-064e-4241-8327-758541098113" path="/var/lib/kubelet/pods/01f3773a-064e-4241-8327-758541098113/volumes" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.160404 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.175709 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.183180 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f399c745-4f4e-44e8-8813-af3861dc0eb0-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.210002 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-config-data" (OuterVolumeSpecName: "config-data") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.229475 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-server-conf" (OuterVolumeSpecName: "server-conf") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256048 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256137 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e35eae-e3e4-43df-83fb-4a2233406e73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256284 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4r7g\" (UniqueName: \"kubernetes.io/projected/a9e35eae-e3e4-43df-83fb-4a2233406e73-kube-api-access-g4r7g\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256400 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e35eae-e3e4-43df-83fb-4a2233406e73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256534 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256569 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " 
pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256605 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256639 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256704 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e35eae-e3e4-43df-83fb-4a2233406e73-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256865 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e35eae-e3e4-43df-83fb-4a2233406e73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256924 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e35eae-e3e4-43df-83fb-4a2233406e73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.256994 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.257390 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9e35eae-e3e4-43df-83fb-4a2233406e73-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.257932 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.258095 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.258298 4710 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.258310 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9e35eae-e3e4-43df-83fb-4a2233406e73-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.258324 4710 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f399c745-4f4e-44e8-8813-af3861dc0eb0-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.258341 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-458jd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-kube-api-access-458jd\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.258357 4710 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f399c745-4f4e-44e8-8813-af3861dc0eb0-pod-info\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.258370 4710 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-server-conf\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.258414 4710 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.258438 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f399c745-4f4e-44e8-8813-af3861dc0eb0-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.260129 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.261059 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9e35eae-e3e4-43df-83fb-4a2233406e73-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.265388 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9e35eae-e3e4-43df-83fb-4a2233406e73-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.272740 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9e35eae-e3e4-43df-83fb-4a2233406e73-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.275414 
4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9e35eae-e3e4-43df-83fb-4a2233406e73-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.276279 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4r7g\" (UniqueName: \"kubernetes.io/projected/a9e35eae-e3e4-43df-83fb-4a2233406e73-kube-api-access-g4r7g\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.299972 4710 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.306568 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"a9e35eae-e3e4-43df-83fb-4a2233406e73\") " pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.343486 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f399c745-4f4e-44e8-8813-af3861dc0eb0" (UID: "f399c745-4f4e-44e8-8813-af3861dc0eb0"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.360142 4710 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f399c745-4f4e-44e8-8813-af3861dc0eb0-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.360368 4710 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.361681 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.596588 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f399c745-4f4e-44e8-8813-af3861dc0eb0","Type":"ContainerDied","Data":"c9fed77c3bc7a8de4268d897e047b33d1360f45fd3facf4a3521ec31dcc9451c"} Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.597080 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.597110 4710 scope.go:117] "RemoveContainer" containerID="30e9673a2bbd342f419e56170fc3b2ad0e2baead63a1f7877b1373479afe4653" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.638633 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.641998 4710 scope.go:117] "RemoveContainer" containerID="a547221951088401addaed6821940f14517efca1a5c55afed29e17422d05f3b6" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.649449 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.693419 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.695393 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.697380 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.701216 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.701473 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-m6x6q" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.713099 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.713194 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.713423 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.713661 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.719417 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.768710 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192d1577-8f40-4d1b-bc83-a7cb9d88e388-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769075 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769234 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192d1577-8f40-4d1b-bc83-a7cb9d88e388-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769306 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769341 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192d1577-8f40-4d1b-bc83-a7cb9d88e388-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769369 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769388 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192d1577-8f40-4d1b-bc83-a7cb9d88e388-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769433 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769454 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192d1577-8f40-4d1b-bc83-a7cb9d88e388-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769491 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.769607 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf4lg\" (UniqueName: \"kubernetes.io/projected/192d1577-8f40-4d1b-bc83-a7cb9d88e388-kube-api-access-xf4lg\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872007 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192d1577-8f40-4d1b-bc83-a7cb9d88e388-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872087 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872110 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192d1577-8f40-4d1b-bc83-a7cb9d88e388-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872140 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872168 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192d1577-8f40-4d1b-bc83-a7cb9d88e388-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872198 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872227 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192d1577-8f40-4d1b-bc83-a7cb9d88e388-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872270 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872339 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf4lg\" (UniqueName: \"kubernetes.io/projected/192d1577-8f40-4d1b-bc83-a7cb9d88e388-kube-api-access-xf4lg\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872346 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872390 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192d1577-8f40-4d1b-bc83-a7cb9d88e388-plugins-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872511 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.872607 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.873056 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.873227 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.873485 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/192d1577-8f40-4d1b-bc83-a7cb9d88e388-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.873843 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/192d1577-8f40-4d1b-bc83-a7cb9d88e388-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.874150 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/192d1577-8f40-4d1b-bc83-a7cb9d88e388-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.878866 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/192d1577-8f40-4d1b-bc83-a7cb9d88e388-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.879055 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 
17:22:35.880215 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/192d1577-8f40-4d1b-bc83-a7cb9d88e388-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.886213 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/192d1577-8f40-4d1b-bc83-a7cb9d88e388-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.937204 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:35 crc kubenswrapper[4710]: I1128 17:22:35.939391 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf4lg\" (UniqueName: \"kubernetes.io/projected/192d1577-8f40-4d1b-bc83-a7cb9d88e388-kube-api-access-xf4lg\") pod \"rabbitmq-cell1-server-0\" (UID: \"192d1577-8f40-4d1b-bc83-a7cb9d88e388\") " pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:36 crc kubenswrapper[4710]: I1128 17:22:36.017579 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:22:36 crc kubenswrapper[4710]: I1128 17:22:36.530060 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 28 17:22:36 crc kubenswrapper[4710]: I1128 17:22:36.613425 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e35eae-e3e4-43df-83fb-4a2233406e73","Type":"ContainerStarted","Data":"551ab2c2e3374b198fef12d617e5869611485074034ccb34ce7bccb4646a4012"} Nov 28 17:22:36 crc kubenswrapper[4710]: I1128 17:22:36.618918 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"192d1577-8f40-4d1b-bc83-a7cb9d88e388","Type":"ContainerStarted","Data":"1da751b826bc10d0b4e3560536b68ed9c94ac06feec87462ebb4812119c03406"} Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.156897 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f399c745-4f4e-44e8-8813-af3861dc0eb0" path="/var/lib/kubelet/pods/f399c745-4f4e-44e8-8813-af3861dc0eb0/volumes" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.159158 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-l2xhb"] Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.161123 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.163728 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.180627 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-l2xhb"] Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.305274 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.305349 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.305611 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-config\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.305981 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqqsv\" (UniqueName: \"kubernetes.io/projected/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-kube-api-access-lqqsv\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.306063 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.306125 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.306206 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-svc\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.409008 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-config\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: 
\"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.409117 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqqsv\" (UniqueName: \"kubernetes.io/projected/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-kube-api-access-lqqsv\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.409140 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.409162 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.409186 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-svc\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.409247 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.409267 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.410219 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-svc\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.410254 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-config\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.410240 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 
crc kubenswrapper[4710]: I1128 17:22:37.410313 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.410407 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.410848 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.453790 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqqsv\" (UniqueName: \"kubernetes.io/projected/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-kube-api-access-lqqsv\") pod \"dnsmasq-dns-d558885bc-l2xhb\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:37 crc kubenswrapper[4710]: I1128 17:22:37.488249 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:38 crc kubenswrapper[4710]: I1128 17:22:38.139089 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-l2xhb"] Nov 28 17:22:38 crc kubenswrapper[4710]: I1128 17:22:38.648471 4710 generic.go:334] "Generic (PLEG): container finished" podID="d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" containerID="f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2" exitCode=0 Nov 28 17:22:38 crc kubenswrapper[4710]: I1128 17:22:38.648935 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" event={"ID":"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd","Type":"ContainerDied","Data":"f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2"} Nov 28 17:22:38 crc kubenswrapper[4710]: I1128 17:22:38.648968 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" event={"ID":"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd","Type":"ContainerStarted","Data":"3155126822d15dd4bfa0aea4ef49cff978c07a8e675e0b3afcf14c123efe4840"} Nov 28 17:22:38 crc kubenswrapper[4710]: I1128 17:22:38.654518 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"192d1577-8f40-4d1b-bc83-a7cb9d88e388","Type":"ContainerStarted","Data":"872a3b7b7a4660867dd3488094c963fb499ab227a7d2031ff3b3a19e2aac15c9"} Nov 28 17:22:38 crc kubenswrapper[4710]: I1128 17:22:38.657516 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e35eae-e3e4-43df-83fb-4a2233406e73","Type":"ContainerStarted","Data":"2840b1bd8b72d8bce729b6c178549bf3c4d44e5c6f728bb371eff6a2daad44b7"} Nov 28 17:22:39 crc kubenswrapper[4710]: I1128 17:22:39.670686 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" 
event={"ID":"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd","Type":"ContainerStarted","Data":"6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428"} Nov 28 17:22:40 crc kubenswrapper[4710]: I1128 17:22:40.682658 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.583050 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" podStartSLOduration=7.583034508 podStartE2EDuration="7.583034508s" podCreationTimestamp="2025-11-28 17:22:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:39.692458562 +0000 UTC m=+1448.950758597" watchObservedRunningTime="2025-11-28 17:22:44.583034508 +0000 UTC m=+1453.841334543" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.591600 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wdd24"] Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.594299 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.610380 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdd24"] Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.686409 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-utilities\") pod \"redhat-operators-wdd24\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.686513 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-catalog-content\") pod \"redhat-operators-wdd24\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.686709 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjn4\" (UniqueName: \"kubernetes.io/projected/90671143-b6a7-40fa-bff1-65d07e203fec-kube-api-access-npjn4\") pod \"redhat-operators-wdd24\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.789119 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npjn4\" (UniqueName: \"kubernetes.io/projected/90671143-b6a7-40fa-bff1-65d07e203fec-kube-api-access-npjn4\") pod \"redhat-operators-wdd24\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.789186 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-utilities\") pod \"redhat-operators-wdd24\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.789256 4710 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-catalog-content\") pod \"redhat-operators-wdd24\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.789790 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-utilities\") pod \"redhat-operators-wdd24\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.789872 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-catalog-content\") pod \"redhat-operators-wdd24\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.811538 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npjn4\" (UniqueName: \"kubernetes.io/projected/90671143-b6a7-40fa-bff1-65d07e203fec-kube-api-access-npjn4\") pod \"redhat-operators-wdd24\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:44 crc kubenswrapper[4710]: I1128 17:22:44.917258 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:45 crc kubenswrapper[4710]: I1128 17:22:45.419521 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wdd24"] Nov 28 17:22:45 crc kubenswrapper[4710]: I1128 17:22:45.742525 4710 generic.go:334] "Generic (PLEG): container finished" podID="90671143-b6a7-40fa-bff1-65d07e203fec" containerID="793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b" exitCode=0 Nov 28 17:22:45 crc kubenswrapper[4710]: I1128 17:22:45.742589 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdd24" event={"ID":"90671143-b6a7-40fa-bff1-65d07e203fec","Type":"ContainerDied","Data":"793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b"} Nov 28 17:22:45 crc kubenswrapper[4710]: I1128 17:22:45.742873 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdd24" event={"ID":"90671143-b6a7-40fa-bff1-65d07e203fec","Type":"ContainerStarted","Data":"d68d6f05c30016fcd00d3dbffc8ab3c172fabd402d65e03a467cf0009905dc2c"} Nov 28 17:22:46 crc kubenswrapper[4710]: I1128 17:22:46.755458 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdd24" event={"ID":"90671143-b6a7-40fa-bff1-65d07e203fec","Type":"ContainerStarted","Data":"5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a"} Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.490998 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.561123 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-zrwtj"] Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.561332 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" 
podUID="57fb07c0-57b1-4950-b522-1a4b7462a841" containerName="dnsmasq-dns" containerID="cri-o://8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d" gracePeriod=10 Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.688284 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-k8mql"] Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.692982 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.702922 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-k8mql"] Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.761792 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.761858 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-config\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.761927 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.761960 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.761977 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.762005 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwhdn\" (UniqueName: \"kubernetes.io/projected/9d817523-77e3-415b-9606-89cfcede076e-kube-api-access-mwhdn\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.762032 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 
17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.864001 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.864075 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.864092 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.864124 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwhdn\" (UniqueName: \"kubernetes.io/projected/9d817523-77e3-415b-9606-89cfcede076e-kube-api-access-mwhdn\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.864155 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.864236 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.864336 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-config\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.865293 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-config\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.865579 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.866263 4710 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.866893 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.867690 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.867879 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d817523-77e3-415b-9606-89cfcede076e-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:47 crc kubenswrapper[4710]: I1128 17:22:47.891239 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwhdn\" (UniqueName: \"kubernetes.io/projected/9d817523-77e3-415b-9606-89cfcede076e-kube-api-access-mwhdn\") pod \"dnsmasq-dns-78c64bc9c5-k8mql\" (UID: \"9d817523-77e3-415b-9606-89cfcede076e\") " pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.041821 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.525812 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-k8mql"] Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.609076 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.778445 4710 generic.go:334] "Generic (PLEG): container finished" podID="57fb07c0-57b1-4950-b522-1a4b7462a841" containerID="8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d" exitCode=0 Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.778555 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.778570 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" event={"ID":"57fb07c0-57b1-4950-b522-1a4b7462a841","Type":"ContainerDied","Data":"8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d"} Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.778885 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-zrwtj" event={"ID":"57fb07c0-57b1-4950-b522-1a4b7462a841","Type":"ContainerDied","Data":"eff1c5709d8972ff948fee19d4aadf74f86d1d517b2edbc4894885cf2ef9cde6"} Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.778904 4710 scope.go:117] "RemoveContainer" containerID="8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.781908 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" event={"ID":"9d817523-77e3-415b-9606-89cfcede076e","Type":"ContainerStarted","Data":"00a0ffbee2eb90c73a39d6eac6f06af7309de98be4238bcf08626c8929fb0d67"} Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.782451 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-sb\") pod \"57fb07c0-57b1-4950-b522-1a4b7462a841\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.782579 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-nb\") pod \"57fb07c0-57b1-4950-b522-1a4b7462a841\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.782622 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-swift-storage-0\") pod \"57fb07c0-57b1-4950-b522-1a4b7462a841\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.782687 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-config\") pod \"57fb07c0-57b1-4950-b522-1a4b7462a841\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.782773 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-svc\") pod \"57fb07c0-57b1-4950-b522-1a4b7462a841\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.782796 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl5w9\" (UniqueName: \"kubernetes.io/projected/57fb07c0-57b1-4950-b522-1a4b7462a841-kube-api-access-jl5w9\") pod \"57fb07c0-57b1-4950-b522-1a4b7462a841\" (UID: \"57fb07c0-57b1-4950-b522-1a4b7462a841\") " Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.785927 4710 generic.go:334] "Generic (PLEG): container finished" podID="90671143-b6a7-40fa-bff1-65d07e203fec" containerID="5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a" 
exitCode=0 Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.785959 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdd24" event={"ID":"90671143-b6a7-40fa-bff1-65d07e203fec","Type":"ContainerDied","Data":"5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a"} Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.789462 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57fb07c0-57b1-4950-b522-1a4b7462a841-kube-api-access-jl5w9" (OuterVolumeSpecName: "kube-api-access-jl5w9") pod "57fb07c0-57b1-4950-b522-1a4b7462a841" (UID: "57fb07c0-57b1-4950-b522-1a4b7462a841"). InnerVolumeSpecName "kube-api-access-jl5w9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.821778 4710 scope.go:117] "RemoveContainer" containerID="2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.853988 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "57fb07c0-57b1-4950-b522-1a4b7462a841" (UID: "57fb07c0-57b1-4950-b522-1a4b7462a841"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.860010 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-config" (OuterVolumeSpecName: "config") pod "57fb07c0-57b1-4950-b522-1a4b7462a841" (UID: "57fb07c0-57b1-4950-b522-1a4b7462a841"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.860921 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "57fb07c0-57b1-4950-b522-1a4b7462a841" (UID: "57fb07c0-57b1-4950-b522-1a4b7462a841"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.865911 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57fb07c0-57b1-4950-b522-1a4b7462a841" (UID: "57fb07c0-57b1-4950-b522-1a4b7462a841"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.867451 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57fb07c0-57b1-4950-b522-1a4b7462a841" (UID: "57fb07c0-57b1-4950-b522-1a4b7462a841"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.885979 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.886015 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.886026 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl5w9\" (UniqueName: \"kubernetes.io/projected/57fb07c0-57b1-4950-b522-1a4b7462a841-kube-api-access-jl5w9\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.886038 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.886047 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:48 crc kubenswrapper[4710]: I1128 17:22:48.886056 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/57fb07c0-57b1-4950-b522-1a4b7462a841-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:49 crc kubenswrapper[4710]: I1128 17:22:49.011503 4710 scope.go:117] "RemoveContainer" containerID="8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d" Nov 28 17:22:49 crc kubenswrapper[4710]: E1128 17:22:49.012050 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d\": container with ID starting with 8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d not found: ID does not exist" containerID="8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d" Nov 28 17:22:49 crc kubenswrapper[4710]: I1128 17:22:49.012085 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d"} err="failed to get container status \"8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d\": rpc error: code = NotFound desc = could not find container \"8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d\": container with ID starting with 8824bf6d7bdfb5a22d773563d2bffd87b53f3487ce5fdc9a642ee82fc646963d not found: ID does not exist" Nov 28 17:22:49 crc kubenswrapper[4710]: I1128 17:22:49.012111 4710 scope.go:117] "RemoveContainer" containerID="2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039" Nov 28 17:22:49 crc kubenswrapper[4710]: E1128 17:22:49.014081 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039\": container with ID starting with 2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039 not found: ID does not exist" containerID="2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039" Nov 28 17:22:49 
crc kubenswrapper[4710]: I1128 17:22:49.014109 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039"} err="failed to get container status \"2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039\": rpc error: code = NotFound desc = could not find container \"2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039\": container with ID starting with 2fd3ec888209c15eddd6c6c66880339b155b2849a1718259739934d607147039 not found: ID does not exist" Nov 28 17:22:49 crc kubenswrapper[4710]: I1128 17:22:49.116182 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-zrwtj"] Nov 28 17:22:49 crc kubenswrapper[4710]: I1128 17:22:49.128675 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-zrwtj"] Nov 28 17:22:49 crc kubenswrapper[4710]: I1128 17:22:49.160642 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57fb07c0-57b1-4950-b522-1a4b7462a841" path="/var/lib/kubelet/pods/57fb07c0-57b1-4950-b522-1a4b7462a841/volumes" Nov 28 17:22:49 crc kubenswrapper[4710]: I1128 17:22:49.798133 4710 generic.go:334] "Generic (PLEG): container finished" podID="9d817523-77e3-415b-9606-89cfcede076e" containerID="8fe4477d171a06447d4536e66bbc8ef4289cc8d179f5a88b9368d43d95eb1496" exitCode=0 Nov 28 17:22:49 crc kubenswrapper[4710]: I1128 17:22:49.798198 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" event={"ID":"9d817523-77e3-415b-9606-89cfcede076e","Type":"ContainerDied","Data":"8fe4477d171a06447d4536e66bbc8ef4289cc8d179f5a88b9368d43d95eb1496"} Nov 28 17:22:50 crc kubenswrapper[4710]: I1128 17:22:50.810725 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" event={"ID":"9d817523-77e3-415b-9606-89cfcede076e","Type":"ContainerStarted","Data":"db8f003f82cfed57676edf9d4e5ab712cd9d7cc49566509a8aefed4981a457ee"} Nov 28 17:22:50 crc kubenswrapper[4710]: I1128 17:22:50.810855 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:50 crc kubenswrapper[4710]: I1128 17:22:50.813956 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdd24" event={"ID":"90671143-b6a7-40fa-bff1-65d07e203fec","Type":"ContainerStarted","Data":"3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002"} Nov 28 17:22:50 crc kubenswrapper[4710]: I1128 17:22:50.844324 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" podStartSLOduration=3.844302395 podStartE2EDuration="3.844302395s" podCreationTimestamp="2025-11-28 17:22:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:22:50.830135159 +0000 UTC m=+1460.088435214" watchObservedRunningTime="2025-11-28 17:22:50.844302395 +0000 UTC m=+1460.102602450" Nov 28 17:22:50 crc kubenswrapper[4710]: I1128 17:22:50.849701 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wdd24" podStartSLOduration=2.191509164 podStartE2EDuration="6.849677604s" podCreationTimestamp="2025-11-28 17:22:44 +0000 UTC" firstStartedPulling="2025-11-28 17:22:45.744910775 +0000 UTC m=+1455.003210810" lastFinishedPulling="2025-11-28 17:22:50.403079205 +0000 UTC 
m=+1459.661379250" observedRunningTime="2025-11-28 17:22:50.846601587 +0000 UTC m=+1460.104901642" watchObservedRunningTime="2025-11-28 17:22:50.849677604 +0000 UTC m=+1460.107977649" Nov 28 17:22:54 crc kubenswrapper[4710]: I1128 17:22:54.917492 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:54 crc kubenswrapper[4710]: I1128 17:22:54.918108 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:22:55 crc kubenswrapper[4710]: I1128 17:22:55.965223 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wdd24" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" containerName="registry-server" probeResult="failure" output=< Nov 28 17:22:55 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s Nov 28 17:22:55 crc kubenswrapper[4710]: > Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.043988 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-k8mql" Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.156324 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-l2xhb"] Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.156567 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" podUID="d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" containerName="dnsmasq-dns" containerID="cri-o://6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428" gracePeriod=10 Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.812843 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.900194 4710 generic.go:334] "Generic (PLEG): container finished" podID="d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" containerID="6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428" exitCode=0 Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.900241 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" event={"ID":"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd","Type":"ContainerDied","Data":"6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428"} Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.900255 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.900272 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-l2xhb" event={"ID":"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd","Type":"ContainerDied","Data":"3155126822d15dd4bfa0aea4ef49cff978c07a8e675e0b3afcf14c123efe4840"} Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.900289 4710 scope.go:117] "RemoveContainer" containerID="6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428" Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.926720 4710 scope.go:117] "RemoveContainer" containerID="f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2" Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.953906 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-sb\") pod \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.954004 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-swift-storage-0\") pod \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.954192 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-config\") pod \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.954262 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-openstack-edpm-ipam\") pod \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.954342 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqqsv\" (UniqueName: \"kubernetes.io/projected/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-kube-api-access-lqqsv\") pod \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.954433 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-nb\") pod \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.954501 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-svc\") pod \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\" (UID: \"d2f9c848-82c6-47c2-84bb-4d47a1b91cbd\") " Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.958916 4710 scope.go:117] "RemoveContainer" containerID="6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428" Nov 28 17:22:58 crc kubenswrapper[4710]: E1128 17:22:58.964910 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428\": container with ID starting with 6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428 not found: ID does not exist" containerID="6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428" Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.964975 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428"} err="failed to get container status \"6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428\": rpc error: code = NotFound desc = could not find container \"6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428\": container with ID starting with 6640e2044a978290b73e68c42229f6bb0d96b26b8cd6ad1a0269cee048f0a428 not found: ID does not exist" Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.965010 4710 scope.go:117] "RemoveContainer" containerID="f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2" Nov 28 17:22:58 crc kubenswrapper[4710]: E1128 17:22:58.965581 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2\": container with ID starting with f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2 not found: ID does not exist" containerID="f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2" Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.965627 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2"} err="failed to get container status \"f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2\": rpc error: code = NotFound desc = could not find container \"f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2\": container with ID starting with f8c6ea7abba38f07cf14ec3a7068a022d1811ad831afefe9ae4ed6ba22db06b2 not found: ID does not exist" Nov 28 17:22:58 crc kubenswrapper[4710]: I1128 17:22:58.976028 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-kube-api-access-lqqsv" (OuterVolumeSpecName: "kube-api-access-lqqsv") pod "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" (UID: "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd"). InnerVolumeSpecName "kube-api-access-lqqsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.011515 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-config" (OuterVolumeSpecName: "config") pod "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" (UID: "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.013660 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" (UID: "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.013711 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" (UID: "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.016925 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" (UID: "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.026080 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" (UID: "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.039981 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" (UID: "d2f9c848-82c6-47c2-84bb-4d47a1b91cbd"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.057188 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqqsv\" (UniqueName: \"kubernetes.io/projected/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-kube-api-access-lqqsv\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.057238 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.057252 4710 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.057264 4710 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.057278 4710 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.057291 4710 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-config\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.057302 4710 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.227428 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-l2xhb"] Nov 28 17:22:59 crc kubenswrapper[4710]: I1128 17:22:59.236750 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-l2xhb"] Nov 28 17:23:01 crc kubenswrapper[4710]: I1128 17:23:01.156139 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" path="/var/lib/kubelet/pods/d2f9c848-82c6-47c2-84bb-4d47a1b91cbd/volumes" Nov 28 17:23:04 crc kubenswrapper[4710]: I1128 17:23:04.966037 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:23:05 crc kubenswrapper[4710]: I1128 17:23:05.027445 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:23:05 crc kubenswrapper[4710]: I1128 17:23:05.203865 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdd24"] Nov 28 17:23:06 crc kubenswrapper[4710]: I1128 17:23:06.983227 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wdd24" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" containerName="registry-server" containerID="cri-o://3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002" gracePeriod=2 Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.449784 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.641442 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-utilities\") pod \"90671143-b6a7-40fa-bff1-65d07e203fec\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.641619 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npjn4\" (UniqueName: \"kubernetes.io/projected/90671143-b6a7-40fa-bff1-65d07e203fec-kube-api-access-npjn4\") pod \"90671143-b6a7-40fa-bff1-65d07e203fec\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.641746 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-catalog-content\") pod \"90671143-b6a7-40fa-bff1-65d07e203fec\" (UID: \"90671143-b6a7-40fa-bff1-65d07e203fec\") " Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.642282 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-utilities" (OuterVolumeSpecName: "utilities") pod "90671143-b6a7-40fa-bff1-65d07e203fec" (UID: "90671143-b6a7-40fa-bff1-65d07e203fec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.648118 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90671143-b6a7-40fa-bff1-65d07e203fec-kube-api-access-npjn4" (OuterVolumeSpecName: "kube-api-access-npjn4") pod "90671143-b6a7-40fa-bff1-65d07e203fec" (UID: "90671143-b6a7-40fa-bff1-65d07e203fec"). InnerVolumeSpecName "kube-api-access-npjn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.744748 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.744806 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npjn4\" (UniqueName: \"kubernetes.io/projected/90671143-b6a7-40fa-bff1-65d07e203fec-kube-api-access-npjn4\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.752915 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90671143-b6a7-40fa-bff1-65d07e203fec" (UID: "90671143-b6a7-40fa-bff1-65d07e203fec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.846768 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90671143-b6a7-40fa-bff1-65d07e203fec-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.995248 4710 generic.go:334] "Generic (PLEG): container finished" podID="90671143-b6a7-40fa-bff1-65d07e203fec" containerID="3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002" exitCode=0 Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.995304 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdd24" event={"ID":"90671143-b6a7-40fa-bff1-65d07e203fec","Type":"ContainerDied","Data":"3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002"} Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.995333 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wdd24" Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.995365 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wdd24" event={"ID":"90671143-b6a7-40fa-bff1-65d07e203fec","Type":"ContainerDied","Data":"d68d6f05c30016fcd00d3dbffc8ab3c172fabd402d65e03a467cf0009905dc2c"} Nov 28 17:23:07 crc kubenswrapper[4710]: I1128 17:23:07.995391 4710 scope.go:117] "RemoveContainer" containerID="3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002" Nov 28 17:23:08 crc kubenswrapper[4710]: I1128 17:23:08.022838 4710 scope.go:117] "RemoveContainer" containerID="5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a" Nov 28 17:23:08 crc kubenswrapper[4710]: I1128 17:23:08.038192 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wdd24"] Nov 28 17:23:08 crc kubenswrapper[4710]: I1128 17:23:08.050483 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wdd24"] Nov 28 17:23:08 crc kubenswrapper[4710]: I1128 17:23:08.064373 4710 scope.go:117] "RemoveContainer" containerID="793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b" Nov 28 17:23:08 crc kubenswrapper[4710]: I1128 17:23:08.095420 4710 scope.go:117] "RemoveContainer" containerID="3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002" Nov 28 17:23:08 crc kubenswrapper[4710]: E1128 17:23:08.096311 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002\": container with ID starting with 3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002 not found: ID does not exist" containerID="3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002" Nov 28 17:23:08 crc kubenswrapper[4710]: I1128 17:23:08.096356 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002"} err="failed to get container status \"3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002\": rpc error: code = NotFound desc = could not find container \"3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002\": container with ID starting with 3901c1b758e8902814ad8da23a929f5c55ca216308a233cd2170f17845b06002 not found: ID does not exist" Nov 28 17:23:08 crc 
kubenswrapper[4710]: I1128 17:23:08.096384 4710 scope.go:117] "RemoveContainer" containerID="5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a" Nov 28 17:23:08 crc kubenswrapper[4710]: E1128 17:23:08.096830 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a\": container with ID starting with 5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a not found: ID does not exist" containerID="5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a" Nov 28 17:23:08 crc kubenswrapper[4710]: I1128 17:23:08.096866 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a"} err="failed to get container status \"5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a\": rpc error: code = NotFound desc = could not find container \"5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a\": container with ID starting with 5f43978dbd90ea0161bc7198365192bc8c2e5d4fcb3998ae17ed707920c9449a not found: ID does not exist" Nov 28 17:23:08 crc kubenswrapper[4710]: I1128 17:23:08.096895 4710 scope.go:117] "RemoveContainer" containerID="793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b" Nov 28 17:23:08 crc kubenswrapper[4710]: E1128 17:23:08.097432 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b\": container with ID starting with 793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b not found: ID does not exist" containerID="793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b" Nov 28 17:23:08 crc kubenswrapper[4710]: I1128 17:23:08.097456 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b"} err="failed to get container status \"793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b\": rpc error: code = NotFound desc = could not find container \"793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b\": container with ID starting with 793932a68b1006352ecd8627c4e75ea8456333334fa0524c3465320d816de11b not found: ID does not exist" Nov 28 17:23:09 crc kubenswrapper[4710]: I1128 17:23:09.157178 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" path="/var/lib/kubelet/pods/90671143-b6a7-40fa-bff1-65d07e203fec/volumes" Nov 28 17:23:10 crc kubenswrapper[4710]: I1128 17:23:10.019812 4710 generic.go:334] "Generic (PLEG): container finished" podID="a9e35eae-e3e4-43df-83fb-4a2233406e73" containerID="2840b1bd8b72d8bce729b6c178549bf3c4d44e5c6f728bb371eff6a2daad44b7" exitCode=0 Nov 28 17:23:10 crc kubenswrapper[4710]: I1128 17:23:10.019920 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e35eae-e3e4-43df-83fb-4a2233406e73","Type":"ContainerDied","Data":"2840b1bd8b72d8bce729b6c178549bf3c4d44e5c6f728bb371eff6a2daad44b7"} Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.032793 4710 generic.go:334] "Generic (PLEG): container finished" podID="192d1577-8f40-4d1b-bc83-a7cb9d88e388" containerID="872a3b7b7a4660867dd3488094c963fb499ab227a7d2031ff3b3a19e2aac15c9" exitCode=0 Nov 28 17:23:11 crc 
kubenswrapper[4710]: I1128 17:23:11.032908 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"192d1577-8f40-4d1b-bc83-a7cb9d88e388","Type":"ContainerDied","Data":"872a3b7b7a4660867dd3488094c963fb499ab227a7d2031ff3b3a19e2aac15c9"} Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.037587 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9e35eae-e3e4-43df-83fb-4a2233406e73","Type":"ContainerStarted","Data":"98001721b0a903695a939e72e99673d9662ecc3acad74bbd2f0de7121964fe1f"} Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.037907 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.097035 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.097011023 podStartE2EDuration="36.097011023s" podCreationTimestamp="2025-11-28 17:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:11.092053837 +0000 UTC m=+1480.350353982" watchObservedRunningTime="2025-11-28 17:23:11.097011023 +0000 UTC m=+1480.355311068" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.397274 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v"] Nov 28 17:23:11 crc kubenswrapper[4710]: E1128 17:23:11.398299 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fb07c0-57b1-4950-b522-1a4b7462a841" containerName="dnsmasq-dns" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.398388 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fb07c0-57b1-4950-b522-1a4b7462a841" containerName="dnsmasq-dns" Nov 28 17:23:11 crc kubenswrapper[4710]: E1128 17:23:11.398453 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" containerName="init" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.398502 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" containerName="init" Nov 28 17:23:11 crc kubenswrapper[4710]: E1128 17:23:11.398560 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" containerName="extract-content" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.398612 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" containerName="extract-content" Nov 28 17:23:11 crc kubenswrapper[4710]: E1128 17:23:11.398669 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" containerName="extract-utilities" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.398722 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" containerName="extract-utilities" Nov 28 17:23:11 crc kubenswrapper[4710]: E1128 17:23:11.398817 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" containerName="registry-server" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.398920 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" containerName="registry-server" Nov 28 17:23:11 crc kubenswrapper[4710]: E1128 17:23:11.399008 4710 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="57fb07c0-57b1-4950-b522-1a4b7462a841" containerName="init" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.399084 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fb07c0-57b1-4950-b522-1a4b7462a841" containerName="init" Nov 28 17:23:11 crc kubenswrapper[4710]: E1128 17:23:11.399156 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" containerName="dnsmasq-dns" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.399208 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" containerName="dnsmasq-dns" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.399470 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="90671143-b6a7-40fa-bff1-65d07e203fec" containerName="registry-server" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.399539 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2f9c848-82c6-47c2-84bb-4d47a1b91cbd" containerName="dnsmasq-dns" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.399609 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fb07c0-57b1-4950-b522-1a4b7462a841" containerName="dnsmasq-dns" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.400619 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.402922 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.403377 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.403477 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.405722 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.413591 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v"] Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.460557 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.460604 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.460684 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7jdq\" (UniqueName: 
\"kubernetes.io/projected/632b6913-e5ef-4e0a-8054-ba62795a3a32-kube-api-access-c7jdq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.460717 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.562706 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.562772 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.562875 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7jdq\" (UniqueName: \"kubernetes.io/projected/632b6913-e5ef-4e0a-8054-ba62795a3a32-kube-api-access-c7jdq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.562917 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.567788 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.569242 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.573456 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.580920 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7jdq\" (UniqueName: \"kubernetes.io/projected/632b6913-e5ef-4e0a-8054-ba62795a3a32-kube-api-access-c7jdq\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:11 crc kubenswrapper[4710]: I1128 17:23:11.771851 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:12 crc kubenswrapper[4710]: I1128 17:23:12.064813 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"192d1577-8f40-4d1b-bc83-a7cb9d88e388","Type":"ContainerStarted","Data":"a841ab2b4f0529b52bbecc9a671d4e13f5b8b91d7396f6a610e2bd52b7696b2f"} Nov 28 17:23:12 crc kubenswrapper[4710]: I1128 17:23:12.065980 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:23:12 crc kubenswrapper[4710]: I1128 17:23:12.123557 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.123527408 podStartE2EDuration="37.123527408s" podCreationTimestamp="2025-11-28 17:22:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:23:12.094274067 +0000 UTC m=+1481.352574112" watchObservedRunningTime="2025-11-28 17:23:12.123527408 +0000 UTC m=+1481.381827453" Nov 28 17:23:12 crc kubenswrapper[4710]: I1128 17:23:12.491258 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v"] Nov 28 17:23:13 crc kubenswrapper[4710]: I1128 17:23:13.074749 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" event={"ID":"632b6913-e5ef-4e0a-8054-ba62795a3a32","Type":"ContainerStarted","Data":"c6f5b91de2640f2a20b79d7eb321f2b5ee216959664ade651115646ede593fbd"} Nov 28 17:23:24 crc kubenswrapper[4710]: I1128 17:23:24.208159 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" event={"ID":"632b6913-e5ef-4e0a-8054-ba62795a3a32","Type":"ContainerStarted","Data":"d67131f316609c0212defe355f89051bdcb5bb0cedd0c6398a0ab2f230d39ef5"} Nov 28 17:23:24 crc kubenswrapper[4710]: I1128 17:23:24.235962 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" podStartSLOduration=2.518384808 podStartE2EDuration="13.235946299s" podCreationTimestamp="2025-11-28 17:23:11 +0000 UTC" firstStartedPulling="2025-11-28 17:23:12.493736472 +0000 UTC m=+1481.752036537" lastFinishedPulling="2025-11-28 17:23:23.211297983 +0000 UTC m=+1492.469598028" observedRunningTime="2025-11-28 17:23:24.230560179 +0000 UTC m=+1493.488860224" watchObservedRunningTime="2025-11-28 17:23:24.235946299 +0000 UTC m=+1493.494246344" Nov 28 17:23:25 crc kubenswrapper[4710]: I1128 17:23:25.365972 4710 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 28 17:23:26 crc kubenswrapper[4710]: I1128 17:23:26.021972 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 28 17:23:35 crc kubenswrapper[4710]: I1128 17:23:35.324598 4710 generic.go:334] "Generic (PLEG): container finished" podID="632b6913-e5ef-4e0a-8054-ba62795a3a32" containerID="d67131f316609c0212defe355f89051bdcb5bb0cedd0c6398a0ab2f230d39ef5" exitCode=0 Nov 28 17:23:35 crc kubenswrapper[4710]: I1128 17:23:35.324689 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" event={"ID":"632b6913-e5ef-4e0a-8054-ba62795a3a32","Type":"ContainerDied","Data":"d67131f316609c0212defe355f89051bdcb5bb0cedd0c6398a0ab2f230d39ef5"} Nov 28 17:23:36 crc kubenswrapper[4710]: I1128 17:23:36.774620 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:36 crc kubenswrapper[4710]: I1128 17:23:36.921439 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-repo-setup-combined-ca-bundle\") pod \"632b6913-e5ef-4e0a-8054-ba62795a3a32\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " Nov 28 17:23:36 crc kubenswrapper[4710]: I1128 17:23:36.921584 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-ssh-key\") pod \"632b6913-e5ef-4e0a-8054-ba62795a3a32\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " Nov 28 17:23:36 crc kubenswrapper[4710]: I1128 17:23:36.921672 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7jdq\" (UniqueName: \"kubernetes.io/projected/632b6913-e5ef-4e0a-8054-ba62795a3a32-kube-api-access-c7jdq\") pod \"632b6913-e5ef-4e0a-8054-ba62795a3a32\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " Nov 28 17:23:36 crc kubenswrapper[4710]: I1128 17:23:36.921804 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-inventory\") pod \"632b6913-e5ef-4e0a-8054-ba62795a3a32\" (UID: \"632b6913-e5ef-4e0a-8054-ba62795a3a32\") " Nov 28 17:23:36 crc kubenswrapper[4710]: I1128 17:23:36.927121 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "632b6913-e5ef-4e0a-8054-ba62795a3a32" (UID: "632b6913-e5ef-4e0a-8054-ba62795a3a32"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:36 crc kubenswrapper[4710]: I1128 17:23:36.934240 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632b6913-e5ef-4e0a-8054-ba62795a3a32-kube-api-access-c7jdq" (OuterVolumeSpecName: "kube-api-access-c7jdq") pod "632b6913-e5ef-4e0a-8054-ba62795a3a32" (UID: "632b6913-e5ef-4e0a-8054-ba62795a3a32"). InnerVolumeSpecName "kube-api-access-c7jdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:36 crc kubenswrapper[4710]: I1128 17:23:36.957318 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "632b6913-e5ef-4e0a-8054-ba62795a3a32" (UID: "632b6913-e5ef-4e0a-8054-ba62795a3a32"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:36 crc kubenswrapper[4710]: I1128 17:23:36.961379 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-inventory" (OuterVolumeSpecName: "inventory") pod "632b6913-e5ef-4e0a-8054-ba62795a3a32" (UID: "632b6913-e5ef-4e0a-8054-ba62795a3a32"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.024068 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7jdq\" (UniqueName: \"kubernetes.io/projected/632b6913-e5ef-4e0a-8054-ba62795a3a32-kube-api-access-c7jdq\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.024115 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.024126 4710 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.024136 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/632b6913-e5ef-4e0a-8054-ba62795a3a32-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.344686 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" event={"ID":"632b6913-e5ef-4e0a-8054-ba62795a3a32","Type":"ContainerDied","Data":"c6f5b91de2640f2a20b79d7eb321f2b5ee216959664ade651115646ede593fbd"} Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.344728 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6f5b91de2640f2a20b79d7eb321f2b5ee216959664ade651115646ede593fbd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.345033 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.437615 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd"] Nov 28 17:23:37 crc kubenswrapper[4710]: E1128 17:23:37.438173 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632b6913-e5ef-4e0a-8054-ba62795a3a32" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.438200 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="632b6913-e5ef-4e0a-8054-ba62795a3a32" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.438492 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="632b6913-e5ef-4e0a-8054-ba62795a3a32" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.439420 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.441517 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.441934 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.442025 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.442105 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.451451 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd"] Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.533732 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h8xmd\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.534014 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdr2c\" (UniqueName: \"kubernetes.io/projected/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-kube-api-access-bdr2c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h8xmd\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.534143 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h8xmd\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.636271 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h8xmd\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.636993 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h8xmd\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.637108 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdr2c\" (UniqueName: \"kubernetes.io/projected/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-kube-api-access-bdr2c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h8xmd\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.639795 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h8xmd\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.648357 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h8xmd\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.654924 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdr2c\" (UniqueName: \"kubernetes.io/projected/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-kube-api-access-bdr2c\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h8xmd\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:37 crc kubenswrapper[4710]: I1128 17:23:37.763379 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:38 crc kubenswrapper[4710]: W1128 17:23:38.400614 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac2e80b6_e6a4_4e45_bc6f_85c2425ff46e.slice/crio-aaa58123ceb9095e4aea76f8ecdca476d643962a6451243889e9d46aa079988e WatchSource:0}: Error finding container aaa58123ceb9095e4aea76f8ecdca476d643962a6451243889e9d46aa079988e: Status 404 returned error can't find the container with id aaa58123ceb9095e4aea76f8ecdca476d643962a6451243889e9d46aa079988e Nov 28 17:23:38 crc kubenswrapper[4710]: I1128 17:23:38.409619 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:23:38 crc kubenswrapper[4710]: I1128 17:23:38.427046 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd"] Nov 28 17:23:39 crc kubenswrapper[4710]: I1128 17:23:39.439980 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" event={"ID":"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e","Type":"ContainerStarted","Data":"8915a56d4a75f13b4142a9ee04105698a7b19297dca92afebd02777b39912332"} Nov 28 17:23:39 crc kubenswrapper[4710]: I1128 17:23:39.440499 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" event={"ID":"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e","Type":"ContainerStarted","Data":"aaa58123ceb9095e4aea76f8ecdca476d643962a6451243889e9d46aa079988e"} Nov 28 17:23:39 crc kubenswrapper[4710]: I1128 17:23:39.457001 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" podStartSLOduration=1.988491883 podStartE2EDuration="2.456979321s" podCreationTimestamp="2025-11-28 17:23:37 +0000 UTC" firstStartedPulling="2025-11-28 17:23:38.409170815 +0000 UTC m=+1507.667470860" lastFinishedPulling="2025-11-28 17:23:38.877658243 +0000 UTC m=+1508.135958298" observedRunningTime="2025-11-28 17:23:39.453580293 +0000 UTC m=+1508.711880358" watchObservedRunningTime="2025-11-28 17:23:39.456979321 +0000 UTC m=+1508.715279366" Nov 28 17:23:40 crc kubenswrapper[4710]: I1128 17:23:40.237838 4710 scope.go:117] "RemoveContainer" containerID="82e30b277816c509cbf159b8d022dcdb19ca69df8dd65c6a2d4237d41a279506" Nov 28 17:23:42 crc kubenswrapper[4710]: I1128 17:23:42.471072 4710 generic.go:334] "Generic (PLEG): container finished" podID="ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e" containerID="8915a56d4a75f13b4142a9ee04105698a7b19297dca92afebd02777b39912332" exitCode=0 Nov 28 17:23:42 crc kubenswrapper[4710]: I1128 17:23:42.471223 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" event={"ID":"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e","Type":"ContainerDied","Data":"8915a56d4a75f13b4142a9ee04105698a7b19297dca92afebd02777b39912332"} Nov 28 17:23:43 crc kubenswrapper[4710]: I1128 17:23:43.344010 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:23:43 crc kubenswrapper[4710]: I1128 17:23:43.344350 4710 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:23:43 crc kubenswrapper[4710]: I1128 17:23:43.976775 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.073622 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdr2c\" (UniqueName: \"kubernetes.io/projected/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-kube-api-access-bdr2c\") pod \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.073886 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-inventory\") pod \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.073935 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-ssh-key\") pod \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\" (UID: \"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e\") " Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.079548 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-kube-api-access-bdr2c" (OuterVolumeSpecName: "kube-api-access-bdr2c") pod "ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e" (UID: "ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e"). InnerVolumeSpecName "kube-api-access-bdr2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.103502 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e" (UID: "ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.107998 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-inventory" (OuterVolumeSpecName: "inventory") pod "ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e" (UID: "ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.176401 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdr2c\" (UniqueName: \"kubernetes.io/projected/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-kube-api-access-bdr2c\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.176441 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.176453 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.499439 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" event={"ID":"ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e","Type":"ContainerDied","Data":"aaa58123ceb9095e4aea76f8ecdca476d643962a6451243889e9d46aa079988e"} Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.499481 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h8xmd" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.499490 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaa58123ceb9095e4aea76f8ecdca476d643962a6451243889e9d46aa079988e" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.570820 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b"] Nov 28 17:23:44 crc kubenswrapper[4710]: E1128 17:23:44.571322 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.571341 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.571555 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.572320 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.574878 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.576528 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.578512 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.578825 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.581067 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b"] Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.698771 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vr9\" (UniqueName: \"kubernetes.io/projected/24989137-409c-4abb-96da-a28e2382b122-kube-api-access-c6vr9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.698868 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.698916 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.699062 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.806058 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6vr9\" (UniqueName: \"kubernetes.io/projected/24989137-409c-4abb-96da-a28e2382b122-kube-api-access-c6vr9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.806425 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-ssh-key\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.806550 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.806713 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.810340 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.810959 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.811096 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.824303 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vr9\" (UniqueName: \"kubernetes.io/projected/24989137-409c-4abb-96da-a28e2382b122-kube-api-access-c6vr9\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:44 crc kubenswrapper[4710]: I1128 17:23:44.901295 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" Nov 28 17:23:45 crc kubenswrapper[4710]: I1128 17:23:45.429273 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b"] Nov 28 17:23:45 crc kubenswrapper[4710]: I1128 17:23:45.509936 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" event={"ID":"24989137-409c-4abb-96da-a28e2382b122","Type":"ContainerStarted","Data":"db4e3a5d048ddca6d525be7c2b40ba8459767c01f19e2e603479dd0a55acab62"} Nov 28 17:23:46 crc kubenswrapper[4710]: I1128 17:23:46.522972 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" event={"ID":"24989137-409c-4abb-96da-a28e2382b122","Type":"ContainerStarted","Data":"6794ba9cbd4177726e6d01e3701be8f4ab626f8c32b645ff039d6737389da13d"} Nov 28 17:23:46 crc kubenswrapper[4710]: I1128 17:23:46.541191 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" podStartSLOduration=1.917158887 podStartE2EDuration="2.541164471s" podCreationTimestamp="2025-11-28 17:23:44 +0000 UTC" firstStartedPulling="2025-11-28 17:23:45.430361143 +0000 UTC m=+1514.688661188" lastFinishedPulling="2025-11-28 17:23:46.054366717 +0000 UTC m=+1515.312666772" observedRunningTime="2025-11-28 17:23:46.537200266 +0000 UTC m=+1515.795500321" watchObservedRunningTime="2025-11-28 17:23:46.541164471 +0000 UTC m=+1515.799464516" Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.678518 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sknw8"] Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.682116 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.693363 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sknw8"] Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.871563 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852a614a-5a2b-4e2b-8946-13ad235093fc-catalog-content\") pod \"certified-operators-sknw8\" (UID: \"852a614a-5a2b-4e2b-8946-13ad235093fc\") " pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.871627 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk9bj\" (UniqueName: \"kubernetes.io/projected/852a614a-5a2b-4e2b-8946-13ad235093fc-kube-api-access-zk9bj\") pod \"certified-operators-sknw8\" (UID: \"852a614a-5a2b-4e2b-8946-13ad235093fc\") " pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.871672 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852a614a-5a2b-4e2b-8946-13ad235093fc-utilities\") pod \"certified-operators-sknw8\" (UID: \"852a614a-5a2b-4e2b-8946-13ad235093fc\") " pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.973439 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852a614a-5a2b-4e2b-8946-13ad235093fc-catalog-content\") pod \"certified-operators-sknw8\" (UID: \"852a614a-5a2b-4e2b-8946-13ad235093fc\") " pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.973513 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk9bj\" (UniqueName: \"kubernetes.io/projected/852a614a-5a2b-4e2b-8946-13ad235093fc-kube-api-access-zk9bj\") pod \"certified-operators-sknw8\" (UID: \"852a614a-5a2b-4e2b-8946-13ad235093fc\") " pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.973546 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852a614a-5a2b-4e2b-8946-13ad235093fc-utilities\") pod \"certified-operators-sknw8\" (UID: \"852a614a-5a2b-4e2b-8946-13ad235093fc\") " pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.973980 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852a614a-5a2b-4e2b-8946-13ad235093fc-catalog-content\") pod \"certified-operators-sknw8\" (UID: \"852a614a-5a2b-4e2b-8946-13ad235093fc\") " pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:10 crc kubenswrapper[4710]: I1128 17:24:10.974087 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852a614a-5a2b-4e2b-8946-13ad235093fc-utilities\") pod \"certified-operators-sknw8\" (UID: \"852a614a-5a2b-4e2b-8946-13ad235093fc\") " pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:11 crc kubenswrapper[4710]: I1128 17:24:11.003997 4710 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zk9bj\" (UniqueName: \"kubernetes.io/projected/852a614a-5a2b-4e2b-8946-13ad235093fc-kube-api-access-zk9bj\") pod \"certified-operators-sknw8\" (UID: \"852a614a-5a2b-4e2b-8946-13ad235093fc\") " pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:11 crc kubenswrapper[4710]: I1128 17:24:11.302778 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:11 crc kubenswrapper[4710]: I1128 17:24:11.762000 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sknw8"] Nov 28 17:24:11 crc kubenswrapper[4710]: W1128 17:24:11.773642 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod852a614a_5a2b_4e2b_8946_13ad235093fc.slice/crio-26c63fab6c8c05fec1e1073fb0f455753cdfd1d87e226680aaf3bd4a150165f4 WatchSource:0}: Error finding container 26c63fab6c8c05fec1e1073fb0f455753cdfd1d87e226680aaf3bd4a150165f4: Status 404 returned error can't find the container with id 26c63fab6c8c05fec1e1073fb0f455753cdfd1d87e226680aaf3bd4a150165f4 Nov 28 17:24:12 crc kubenswrapper[4710]: I1128 17:24:12.798833 4710 generic.go:334] "Generic (PLEG): container finished" podID="852a614a-5a2b-4e2b-8946-13ad235093fc" containerID="ac44dabd4bbb45b2bb92f631bcace0e99bfe1002c2f89d807ee8fa718869e9f8" exitCode=0 Nov 28 17:24:12 crc kubenswrapper[4710]: I1128 17:24:12.798937 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sknw8" event={"ID":"852a614a-5a2b-4e2b-8946-13ad235093fc","Type":"ContainerDied","Data":"ac44dabd4bbb45b2bb92f631bcace0e99bfe1002c2f89d807ee8fa718869e9f8"} Nov 28 17:24:12 crc kubenswrapper[4710]: I1128 17:24:12.799335 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sknw8" event={"ID":"852a614a-5a2b-4e2b-8946-13ad235093fc","Type":"ContainerStarted","Data":"26c63fab6c8c05fec1e1073fb0f455753cdfd1d87e226680aaf3bd4a150165f4"} Nov 28 17:24:13 crc kubenswrapper[4710]: I1128 17:24:13.343506 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:24:13 crc kubenswrapper[4710]: I1128 17:24:13.343573 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:24:18 crc kubenswrapper[4710]: I1128 17:24:18.865536 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sknw8" event={"ID":"852a614a-5a2b-4e2b-8946-13ad235093fc","Type":"ContainerStarted","Data":"0721ac1af30a99425634099242d14ab47396fa712f170917815ee6b9d553ff81"} Nov 28 17:24:19 crc kubenswrapper[4710]: I1128 17:24:19.879280 4710 generic.go:334] "Generic (PLEG): container finished" podID="852a614a-5a2b-4e2b-8946-13ad235093fc" containerID="0721ac1af30a99425634099242d14ab47396fa712f170917815ee6b9d553ff81" exitCode=0 Nov 28 17:24:19 crc kubenswrapper[4710]: I1128 17:24:19.879339 4710 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-sknw8" event={"ID":"852a614a-5a2b-4e2b-8946-13ad235093fc","Type":"ContainerDied","Data":"0721ac1af30a99425634099242d14ab47396fa712f170917815ee6b9d553ff81"} Nov 28 17:24:21 crc kubenswrapper[4710]: I1128 17:24:21.905207 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sknw8" event={"ID":"852a614a-5a2b-4e2b-8946-13ad235093fc","Type":"ContainerStarted","Data":"acbfd0cb82515a4b2f7b3891ad1fbe7a9d74ffc063cb7e69a72b0dda06b5952b"} Nov 28 17:24:21 crc kubenswrapper[4710]: I1128 17:24:21.930537 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sknw8" podStartSLOduration=4.353777957 podStartE2EDuration="11.930518225s" podCreationTimestamp="2025-11-28 17:24:10 +0000 UTC" firstStartedPulling="2025-11-28 17:24:12.800743277 +0000 UTC m=+1542.059043322" lastFinishedPulling="2025-11-28 17:24:20.377483545 +0000 UTC m=+1549.635783590" observedRunningTime="2025-11-28 17:24:21.925943471 +0000 UTC m=+1551.184243526" watchObservedRunningTime="2025-11-28 17:24:21.930518225 +0000 UTC m=+1551.188818270" Nov 28 17:24:31 crc kubenswrapper[4710]: I1128 17:24:31.303244 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:31 crc kubenswrapper[4710]: I1128 17:24:31.306621 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:31 crc kubenswrapper[4710]: I1128 17:24:31.352659 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.074960 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sknw8" Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.179362 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sknw8"] Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.214159 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4w9z"] Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.214467 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c4w9z" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerName="registry-server" containerID="cri-o://fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2" gracePeriod=2 Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.832359 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c4w9z"
Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.964190 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-catalog-content\") pod \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") "
Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.964246 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8s92\" (UniqueName: \"kubernetes.io/projected/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-kube-api-access-r8s92\") pod \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") "
Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.964285 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-utilities\") pod \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\" (UID: \"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0\") "
Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.976512 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-kube-api-access-r8s92" (OuterVolumeSpecName: "kube-api-access-r8s92") pod "89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" (UID: "89df42e9-55bb-4ac9-b1b9-57f42b7e62c0"). InnerVolumeSpecName "kube-api-access-r8s92". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:24:32 crc kubenswrapper[4710]: I1128 17:24:32.986225 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-utilities" (OuterVolumeSpecName: "utilities") pod "89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" (UID: "89df42e9-55bb-4ac9-b1b9-57f42b7e62c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.024076 4710 generic.go:334] "Generic (PLEG): container finished" podID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerID="fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2" exitCode=0
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.024132 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4w9z" event={"ID":"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0","Type":"ContainerDied","Data":"fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2"}
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.024483 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c4w9z" event={"ID":"89df42e9-55bb-4ac9-b1b9-57f42b7e62c0","Type":"ContainerDied","Data":"be596a2555a618ad082d84fee414227d50be4060901c231bf06cb44e826a3499"}
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.024503 4710 scope.go:117] "RemoveContainer" containerID="fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.024140 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c4w9z"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.050520 4710 scope.go:117] "RemoveContainer" containerID="7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.056682 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" (UID: "89df42e9-55bb-4ac9-b1b9-57f42b7e62c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.067975 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.068027 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8s92\" (UniqueName: \"kubernetes.io/projected/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-kube-api-access-r8s92\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.068043 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.097448 4710 scope.go:117] "RemoveContainer" containerID="cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.135208 4710 scope.go:117] "RemoveContainer" containerID="fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2"
Nov 28 17:24:33 crc kubenswrapper[4710]: E1128 17:24:33.135795 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2\": container with ID starting with fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2 not found: ID does not exist" containerID="fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.135827 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2"} err="failed to get container status \"fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2\": rpc error: code = NotFound desc = could not find container \"fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2\": container with ID starting with fb90ffc420e2e2a1e45a85693b53591829b6dbf26071d084573aa5fd42d60ea2 not found: ID does not exist"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.135865 4710 scope.go:117] "RemoveContainer" containerID="7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5"
Nov 28 17:24:33 crc kubenswrapper[4710]: E1128 17:24:33.136208 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5\": container with ID starting with 7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5 not found: ID does not exist" containerID="7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.136227 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5"} err="failed to get container status \"7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5\": rpc error: code = NotFound desc = could not find container \"7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5\": container with ID starting with 7650d4f793a8c65eb888c1f866b9d0c62e7a3b877ca4215f14f17ec8cb819dd5 not found: ID does not exist"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.136262 4710 scope.go:117] "RemoveContainer" containerID="cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6"
Nov 28 17:24:33 crc kubenswrapper[4710]: E1128 17:24:33.136707 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6\": container with ID starting with cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6 not found: ID does not exist" containerID="cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.136749 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6"} err="failed to get container status \"cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6\": rpc error: code = NotFound desc = could not find container \"cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6\": container with ID starting with cf0f1191a399a09170baad53fce509733c9f1cbc88a83a1dbcb89e25cd9840f6 not found: ID does not exist"
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.410803 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c4w9z"]
Nov 28 17:24:33 crc kubenswrapper[4710]: I1128 17:24:33.423482 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c4w9z"]
Nov 28 17:24:35 crc kubenswrapper[4710]: I1128 17:24:35.161519 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" path="/var/lib/kubelet/pods/89df42e9-55bb-4ac9-b1b9-57f42b7e62c0/volumes"
Nov 28 17:24:40 crc kubenswrapper[4710]: I1128 17:24:40.348789 4710 scope.go:117] "RemoveContainer" containerID="670968ed8d0ca14e5820522e131f1e9115dfbdd62f7f6b6cd1010a5b9df4d3fc"
Nov 28 17:24:40 crc kubenswrapper[4710]: I1128 17:24:40.376495 4710 scope.go:117] "RemoveContainer" containerID="2429445f978de6ee97187b91c0c20a14eb1a3415cd961c14bbfe88858358d74f"
Nov 28 17:24:40 crc kubenswrapper[4710]: I1128 17:24:40.438891 4710 scope.go:117] "RemoveContainer" containerID="1e104a94e96fd6e373c1e5a2cf49ff3a0548c868aee72bf03019b9e0ee881603"
Nov 28 17:24:43 crc kubenswrapper[4710]: I1128 17:24:43.343706 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:24:43 crc kubenswrapper[4710]: I1128 17:24:43.344248 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:24:43 crc kubenswrapper[4710]: I1128 17:24:43.344298 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc"
Nov 28 17:24:43 crc kubenswrapper[4710]: I1128 17:24:43.345091 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 28 17:24:43 crc kubenswrapper[4710]: I1128 17:24:43.345145 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" gracePeriod=600
Nov 28 17:24:43 crc kubenswrapper[4710]: E1128 17:24:43.468711 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.150045 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" exitCode=0
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.150102 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"}
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.150179 4710 scope.go:117] "RemoveContainer" containerID="21fd4e025722a9602a1e946aa30e2ca8c2a97b408a56cd641a1c9d99fc13a61e"
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.150797 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:24:44 crc kubenswrapper[4710]: E1128 17:24:44.151204 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.972968 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"]
Nov 28 17:24:44 crc kubenswrapper[4710]: E1128 17:24:44.973513 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerName="registry-server"
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.973527 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerName="registry-server"
Nov 28 17:24:44 crc kubenswrapper[4710]: E1128 17:24:44.973547 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerName="extract-content"
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.973554 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerName="extract-content"
Nov 28 17:24:44 crc kubenswrapper[4710]: E1128 17:24:44.973579 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerName="extract-utilities"
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.973586 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerName="extract-utilities"
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.973818 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="89df42e9-55bb-4ac9-b1b9-57f42b7e62c0" containerName="registry-server"
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.975375 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:44 crc kubenswrapper[4710]: I1128 17:24:44.993572 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"]
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.136701 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mpkh\" (UniqueName: \"kubernetes.io/projected/6c50b56e-28e9-4176-b669-70c074baa068-kube-api-access-5mpkh\") pod \"redhat-marketplace-p4b9d\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") " pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.137227 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-catalog-content\") pod \"redhat-marketplace-p4b9d\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") " pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.137503 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-utilities\") pod \"redhat-marketplace-p4b9d\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") " pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.239654 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mpkh\" (UniqueName: \"kubernetes.io/projected/6c50b56e-28e9-4176-b669-70c074baa068-kube-api-access-5mpkh\") pod \"redhat-marketplace-p4b9d\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") " pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.239743 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-catalog-content\") pod \"redhat-marketplace-p4b9d\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") " pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.239887 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-utilities\") pod \"redhat-marketplace-p4b9d\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") " pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.240282 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-catalog-content\") pod \"redhat-marketplace-p4b9d\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") " pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.240390 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-utilities\") pod \"redhat-marketplace-p4b9d\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") " pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.261018 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mpkh\" (UniqueName: \"kubernetes.io/projected/6c50b56e-28e9-4176-b669-70c074baa068-kube-api-access-5mpkh\") pod \"redhat-marketplace-p4b9d\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") " pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.304687 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:45 crc kubenswrapper[4710]: I1128 17:24:45.754507 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"]
Nov 28 17:24:46 crc kubenswrapper[4710]: I1128 17:24:46.174947 4710 generic.go:334] "Generic (PLEG): container finished" podID="6c50b56e-28e9-4176-b669-70c074baa068" containerID="5cffd73c49b46daed424089209381ad85a40f3c3dd8bcb19bc5f2d9133477086" exitCode=0
Nov 28 17:24:46 crc kubenswrapper[4710]: I1128 17:24:46.175031 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"6c50b56e-28e9-4176-b669-70c074baa068","Type":"ContainerDied","Data":"5cffd73c49b46daed424089209381ad85a40f3c3dd8bcb19bc5f2d9133477086"}
Nov 28 17:24:46 crc kubenswrapper[4710]: I1128 17:24:46.175147 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"6c50b56e-28e9-4176-b669-70c074baa068","Type":"ContainerStarted","Data":"0b8b5d075a90a7dc17bc22d3b1ffd36f8b33ada89842762565de35172793d952"}
Nov 28 17:24:47 crc kubenswrapper[4710]: I1128 17:24:47.193808 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"6c50b56e-28e9-4176-b669-70c074baa068","Type":"ContainerStarted","Data":"9237cdb97ae0db7e465dedd228d5d54f1a2a7fda7a9ae144f9fcf61c61b765ea"}
Nov 28 17:24:48 crc kubenswrapper[4710]: I1128 17:24:48.204961 4710 generic.go:334] "Generic (PLEG): container finished" podID="6c50b56e-28e9-4176-b669-70c074baa068" containerID="9237cdb97ae0db7e465dedd228d5d54f1a2a7fda7a9ae144f9fcf61c61b765ea" exitCode=0
Nov 28 17:24:48 crc kubenswrapper[4710]: I1128 17:24:48.205150 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"6c50b56e-28e9-4176-b669-70c074baa068","Type":"ContainerDied","Data":"9237cdb97ae0db7e465dedd228d5d54f1a2a7fda7a9ae144f9fcf61c61b765ea"}
Nov 28 17:24:49 crc kubenswrapper[4710]: I1128 17:24:49.216349 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"6c50b56e-28e9-4176-b669-70c074baa068","Type":"ContainerStarted","Data":"a0147473297e95a351cc7287c8ae9ce74d6452ba010ec45a6ee96cfadf2b0987"}
Nov 28 17:24:49 crc kubenswrapper[4710]: I1128 17:24:49.244694 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p4b9d" podStartSLOduration=2.790439089 podStartE2EDuration="5.244672479s" podCreationTimestamp="2025-11-28 17:24:44 +0000 UTC" firstStartedPulling="2025-11-28 17:24:46.177245285 +0000 UTC m=+1575.435545330" lastFinishedPulling="2025-11-28 17:24:48.631478675 +0000 UTC m=+1577.889778720" observedRunningTime="2025-11-28 17:24:49.233915689 +0000 UTC m=+1578.492215754" watchObservedRunningTime="2025-11-28 17:24:49.244672479 +0000 UTC m=+1578.502972524"
Nov 28 17:24:55 crc kubenswrapper[4710]: I1128 17:24:55.305707 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:55 crc kubenswrapper[4710]: I1128 17:24:55.306682 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:55 crc kubenswrapper[4710]: I1128 17:24:55.352659 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:56 crc kubenswrapper[4710]: I1128 17:24:56.342959 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:56 crc kubenswrapper[4710]: I1128 17:24:56.408818 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"]
Nov 28 17:24:57 crc kubenswrapper[4710]: I1128 17:24:57.141608 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:24:57 crc kubenswrapper[4710]: E1128 17:24:57.142189 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:24:58 crc kubenswrapper[4710]: I1128 17:24:58.310082 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p4b9d" podUID="6c50b56e-28e9-4176-b669-70c074baa068" containerName="registry-server" containerID="cri-o://a0147473297e95a351cc7287c8ae9ce74d6452ba010ec45a6ee96cfadf2b0987" gracePeriod=2
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.321783 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"6c50b56e-28e9-4176-b669-70c074baa068","Type":"ContainerDied","Data":"a0147473297e95a351cc7287c8ae9ce74d6452ba010ec45a6ee96cfadf2b0987"}
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.322078 4710 generic.go:334] "Generic (PLEG): container finished" podID="6c50b56e-28e9-4176-b669-70c074baa068" containerID="a0147473297e95a351cc7287c8ae9ce74d6452ba010ec45a6ee96cfadf2b0987" exitCode=0
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.322112 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p4b9d" event={"ID":"6c50b56e-28e9-4176-b669-70c074baa068","Type":"ContainerDied","Data":"0b8b5d075a90a7dc17bc22d3b1ffd36f8b33ada89842762565de35172793d952"}
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.322125 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b8b5d075a90a7dc17bc22d3b1ffd36f8b33ada89842762565de35172793d952"
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.338000 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.434738 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mpkh\" (UniqueName: \"kubernetes.io/projected/6c50b56e-28e9-4176-b669-70c074baa068-kube-api-access-5mpkh\") pod \"6c50b56e-28e9-4176-b669-70c074baa068\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") "
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.435011 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-catalog-content\") pod \"6c50b56e-28e9-4176-b669-70c074baa068\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") "
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.435063 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-utilities\") pod \"6c50b56e-28e9-4176-b669-70c074baa068\" (UID: \"6c50b56e-28e9-4176-b669-70c074baa068\") "
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.436357 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-utilities" (OuterVolumeSpecName: "utilities") pod "6c50b56e-28e9-4176-b669-70c074baa068" (UID: "6c50b56e-28e9-4176-b669-70c074baa068"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.443561 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c50b56e-28e9-4176-b669-70c074baa068-kube-api-access-5mpkh" (OuterVolumeSpecName: "kube-api-access-5mpkh") pod "6c50b56e-28e9-4176-b669-70c074baa068" (UID: "6c50b56e-28e9-4176-b669-70c074baa068"). InnerVolumeSpecName "kube-api-access-5mpkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.453175 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c50b56e-28e9-4176-b669-70c074baa068" (UID: "6c50b56e-28e9-4176-b669-70c074baa068"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.538311 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mpkh\" (UniqueName: \"kubernetes.io/projected/6c50b56e-28e9-4176-b669-70c074baa068-kube-api-access-5mpkh\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.538365 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 17:24:59 crc kubenswrapper[4710]: I1128 17:24:59.538384 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c50b56e-28e9-4176-b669-70c074baa068-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:00 crc kubenswrapper[4710]: I1128 17:25:00.332123 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p4b9d"
Nov 28 17:25:00 crc kubenswrapper[4710]: I1128 17:25:00.370813 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"]
Nov 28 17:25:00 crc kubenswrapper[4710]: I1128 17:25:00.385470 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p4b9d"]
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.181246 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c50b56e-28e9-4176-b669-70c074baa068" path="/var/lib/kubelet/pods/6c50b56e-28e9-4176-b669-70c074baa068/volumes"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.585540 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rfk9t"]
Nov 28 17:25:01 crc kubenswrapper[4710]: E1128 17:25:01.586052 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c50b56e-28e9-4176-b669-70c074baa068" containerName="extract-utilities"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.586065 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c50b56e-28e9-4176-b669-70c074baa068" containerName="extract-utilities"
Nov 28 17:25:01 crc kubenswrapper[4710]: E1128 17:25:01.586097 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c50b56e-28e9-4176-b669-70c074baa068" containerName="registry-server"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.586104 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c50b56e-28e9-4176-b669-70c074baa068" containerName="registry-server"
Nov 28 17:25:01 crc kubenswrapper[4710]: E1128 17:25:01.586119 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c50b56e-28e9-4176-b669-70c074baa068" containerName="extract-content"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.586125 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c50b56e-28e9-4176-b669-70c074baa068" containerName="extract-content"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.586338 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c50b56e-28e9-4176-b669-70c074baa068" containerName="registry-server"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.588114 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.608888 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfk9t"]
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.684600 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7vjt\" (UniqueName: \"kubernetes.io/projected/5263d384-33b1-4e7e-8377-f5e0ebf04372-kube-api-access-j7vjt\") pod \"community-operators-rfk9t\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") " pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.684995 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-catalog-content\") pod \"community-operators-rfk9t\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") " pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.685081 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-utilities\") pod \"community-operators-rfk9t\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") " pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.787333 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7vjt\" (UniqueName: \"kubernetes.io/projected/5263d384-33b1-4e7e-8377-f5e0ebf04372-kube-api-access-j7vjt\") pod \"community-operators-rfk9t\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") " pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.787495 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-catalog-content\") pod \"community-operators-rfk9t\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") " pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.787522 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-utilities\") pod \"community-operators-rfk9t\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") " pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.788039 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-catalog-content\") pod \"community-operators-rfk9t\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") " pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.788080 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-utilities\") pod \"community-operators-rfk9t\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") " pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.817005 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7vjt\" (UniqueName: \"kubernetes.io/projected/5263d384-33b1-4e7e-8377-f5e0ebf04372-kube-api-access-j7vjt\") pod \"community-operators-rfk9t\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") " pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:01 crc kubenswrapper[4710]: I1128 17:25:01.909769 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:02 crc kubenswrapper[4710]: I1128 17:25:02.525554 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfk9t"]
Nov 28 17:25:03 crc kubenswrapper[4710]: I1128 17:25:03.372369 4710 generic.go:334] "Generic (PLEG): container finished" podID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerID="f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98" exitCode=0
Nov 28 17:25:03 crc kubenswrapper[4710]: I1128 17:25:03.372436 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfk9t" event={"ID":"5263d384-33b1-4e7e-8377-f5e0ebf04372","Type":"ContainerDied","Data":"f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98"}
Nov 28 17:25:03 crc kubenswrapper[4710]: I1128 17:25:03.372472 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfk9t" event={"ID":"5263d384-33b1-4e7e-8377-f5e0ebf04372","Type":"ContainerStarted","Data":"d33b32f3ecbb292f357be7d913633a157690ccd3a067fe932d4beb74fadeb78b"}
Nov 28 17:25:05 crc kubenswrapper[4710]: I1128 17:25:05.395413 4710 generic.go:334] "Generic (PLEG): container finished" podID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerID="50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870" exitCode=0
Nov 28 17:25:05 crc kubenswrapper[4710]: I1128 17:25:05.395452 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfk9t" event={"ID":"5263d384-33b1-4e7e-8377-f5e0ebf04372","Type":"ContainerDied","Data":"50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870"}
Nov 28 17:25:07 crc kubenswrapper[4710]: I1128 17:25:07.421802 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfk9t" event={"ID":"5263d384-33b1-4e7e-8377-f5e0ebf04372","Type":"ContainerStarted","Data":"b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975"}
Nov 28 17:25:07 crc kubenswrapper[4710]: I1128 17:25:07.450394 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rfk9t" podStartSLOduration=3.58069388 podStartE2EDuration="6.450369258s" podCreationTimestamp="2025-11-28 17:25:01 +0000 UTC" firstStartedPulling="2025-11-28 17:25:03.374589931 +0000 UTC m=+1592.632889986" lastFinishedPulling="2025-11-28 17:25:06.244265319 +0000 UTC m=+1595.502565364" observedRunningTime="2025-11-28 17:25:07.437354858 +0000 UTC m=+1596.695654893" watchObservedRunningTime="2025-11-28 17:25:07.450369258 +0000 UTC m=+1596.708669303"
Nov 28 17:25:10 crc kubenswrapper[4710]: I1128 17:25:10.141674 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:25:10 crc kubenswrapper[4710]: E1128 17:25:10.142273 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:25:11 crc kubenswrapper[4710]: I1128 17:25:11.910933 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:11 crc kubenswrapper[4710]: I1128 17:25:11.911325 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:11 crc kubenswrapper[4710]: I1128 17:25:11.991561 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:12 crc kubenswrapper[4710]: I1128 17:25:12.532743 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:12 crc kubenswrapper[4710]: I1128 17:25:12.595623 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rfk9t"]
Nov 28 17:25:14 crc kubenswrapper[4710]: I1128 17:25:14.497874 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rfk9t" podUID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerName="registry-server" containerID="cri-o://b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975" gracePeriod=2
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.003037 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.086070 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-catalog-content\") pod \"5263d384-33b1-4e7e-8377-f5e0ebf04372\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") "
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.086158 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7vjt\" (UniqueName: \"kubernetes.io/projected/5263d384-33b1-4e7e-8377-f5e0ebf04372-kube-api-access-j7vjt\") pod \"5263d384-33b1-4e7e-8377-f5e0ebf04372\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") "
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.086309 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-utilities\") pod \"5263d384-33b1-4e7e-8377-f5e0ebf04372\" (UID: \"5263d384-33b1-4e7e-8377-f5e0ebf04372\") "
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.087384 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-utilities" (OuterVolumeSpecName: "utilities") pod "5263d384-33b1-4e7e-8377-f5e0ebf04372" (UID: "5263d384-33b1-4e7e-8377-f5e0ebf04372"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.092725 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5263d384-33b1-4e7e-8377-f5e0ebf04372-kube-api-access-j7vjt" (OuterVolumeSpecName: "kube-api-access-j7vjt") pod "5263d384-33b1-4e7e-8377-f5e0ebf04372" (UID: "5263d384-33b1-4e7e-8377-f5e0ebf04372"). InnerVolumeSpecName "kube-api-access-j7vjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.146718 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5263d384-33b1-4e7e-8377-f5e0ebf04372" (UID: "5263d384-33b1-4e7e-8377-f5e0ebf04372"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.189190 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.189220 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7vjt\" (UniqueName: \"kubernetes.io/projected/5263d384-33b1-4e7e-8377-f5e0ebf04372-kube-api-access-j7vjt\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.189230 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5263d384-33b1-4e7e-8377-f5e0ebf04372-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.513133 4710 generic.go:334] "Generic (PLEG): container finished" podID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerID="b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975" exitCode=0
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.513187 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfk9t" event={"ID":"5263d384-33b1-4e7e-8377-f5e0ebf04372","Type":"ContainerDied","Data":"b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975"}
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.513224 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfk9t" event={"ID":"5263d384-33b1-4e7e-8377-f5e0ebf04372","Type":"ContainerDied","Data":"d33b32f3ecbb292f357be7d913633a157690ccd3a067fe932d4beb74fadeb78b"}
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.513248 4710 scope.go:117] "RemoveContainer" containerID="b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.513193 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfk9t"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.540597 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rfk9t"]
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.544105 4710 scope.go:117] "RemoveContainer" containerID="50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.556579 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rfk9t"]
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.586675 4710 scope.go:117] "RemoveContainer" containerID="f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.644608 4710 scope.go:117] "RemoveContainer" containerID="b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975"
Nov 28 17:25:15 crc kubenswrapper[4710]: E1128 17:25:15.645356 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975\": container with ID starting with b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975 not found: ID does not exist" containerID="b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.645414 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975"} err="failed to get container status \"b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975\": rpc error: code = NotFound desc = could not find container \"b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975\": container with ID starting with b34aee29774153492b13dd51a6d6cd0ae77a55765859a80c4b24f97a80889975 not found: ID does not exist"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.645451 4710 scope.go:117] "RemoveContainer" containerID="50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870"
Nov 28 17:25:15 crc kubenswrapper[4710]: E1128 17:25:15.645970 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870\": container with ID starting with 50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870 not found: ID does not exist" containerID="50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.645993 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870"} err="failed to get container status \"50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870\": rpc error: code = NotFound desc = could not find container \"50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870\": container with ID starting with 50c3690eb897b15c02f6a7d2a37748bd812fd4137649e96a58ae91049893b870 not found: ID does not exist"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.646027 4710 scope.go:117] "RemoveContainer" containerID="f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98"
Nov 28 17:25:15 crc kubenswrapper[4710]: E1128 17:25:15.646263 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98\": container with ID starting with f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98 not found: ID does not exist" containerID="f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98"
Nov 28 17:25:15 crc kubenswrapper[4710]: I1128 17:25:15.646281 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98"} err="failed to get container status \"f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98\": rpc error: code = NotFound desc = could not find container \"f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98\": container with ID starting with f5e5a6650016a0e9cdb0e6598a0eb50a6eaf6052ffe81f439e09c65221ad9e98 not found: ID does not exist"
Nov 28 17:25:17 crc kubenswrapper[4710]: I1128 17:25:17.158556 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5263d384-33b1-4e7e-8377-f5e0ebf04372" path="/var/lib/kubelet/pods/5263d384-33b1-4e7e-8377-f5e0ebf04372/volumes"
Nov 28 17:25:25 crc kubenswrapper[4710]: I1128 17:25:25.141974 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:25:25 crc kubenswrapper[4710]: E1128 17:25:25.142841 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:25:39 crc kubenswrapper[4710]: I1128 17:25:39.142050 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:25:39 crc kubenswrapper[4710]: E1128 17:25:39.142861 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:25:54 crc kubenswrapper[4710]: I1128 17:25:54.141875 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:25:54 crc kubenswrapper[4710]: E1128 17:25:54.142923 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:26:07 crc kubenswrapper[4710]: I1128 17:26:07.143228 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:26:07 crc kubenswrapper[4710]: E1128 17:26:07.144164 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:26:22 crc kubenswrapper[4710]: I1128 17:26:22.141434 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:26:22 crc kubenswrapper[4710]: E1128 17:26:22.143075 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:26:36 crc kubenswrapper[4710]: I1128 17:26:36.142039 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:26:36 crc kubenswrapper[4710]: E1128 17:26:36.142855 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:26:48 crc kubenswrapper[4710]: I1128 17:26:48.532140 4710 generic.go:334] "Generic (PLEG): container finished" podID="24989137-409c-4abb-96da-a28e2382b122" containerID="6794ba9cbd4177726e6d01e3701be8f4ab626f8c32b645ff039d6737389da13d" exitCode=0
Nov 28 17:26:48 crc kubenswrapper[4710]: I1128 17:26:48.532218 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" event={"ID":"24989137-409c-4abb-96da-a28e2382b122","Type":"ContainerDied","Data":"6794ba9cbd4177726e6d01e3701be8f4ab626f8c32b645ff039d6737389da13d"}
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.199725 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.265814 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-ssh-key\") pod \"24989137-409c-4abb-96da-a28e2382b122\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") "
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.266068 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6vr9\" (UniqueName: \"kubernetes.io/projected/24989137-409c-4abb-96da-a28e2382b122-kube-api-access-c6vr9\") pod \"24989137-409c-4abb-96da-a28e2382b122\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") "
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.266101 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-inventory\") pod \"24989137-409c-4abb-96da-a28e2382b122\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") "
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.266132 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-bootstrap-combined-ca-bundle\") pod \"24989137-409c-4abb-96da-a28e2382b122\" (UID: \"24989137-409c-4abb-96da-a28e2382b122\") "
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.272457 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24989137-409c-4abb-96da-a28e2382b122-kube-api-access-c6vr9" (OuterVolumeSpecName: "kube-api-access-c6vr9") pod "24989137-409c-4abb-96da-a28e2382b122" (UID: "24989137-409c-4abb-96da-a28e2382b122"). InnerVolumeSpecName "kube-api-access-c6vr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.272501 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "24989137-409c-4abb-96da-a28e2382b122" (UID: "24989137-409c-4abb-96da-a28e2382b122"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.299383 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "24989137-409c-4abb-96da-a28e2382b122" (UID: "24989137-409c-4abb-96da-a28e2382b122"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.302873 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-inventory" (OuterVolumeSpecName: "inventory") pod "24989137-409c-4abb-96da-a28e2382b122" (UID: "24989137-409c-4abb-96da-a28e2382b122"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.369342 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.369569 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6vr9\" (UniqueName: \"kubernetes.io/projected/24989137-409c-4abb-96da-a28e2382b122-kube-api-access-c6vr9\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.369649 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.369722 4710 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24989137-409c-4abb-96da-a28e2382b122-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.708572 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b" event={"ID":"24989137-409c-4abb-96da-a28e2382b122","Type":"ContainerDied","Data":"db4e3a5d048ddca6d525be7c2b40ba8459767c01f19e2e603479dd0a55acab62"}
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.708849 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db4e3a5d048ddca6d525be7c2b40ba8459767c01f19e2e603479dd0a55acab62"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.708631 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.754338 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"]
Nov 28 17:26:50 crc kubenswrapper[4710]: E1128 17:26:50.755197 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerName="extract-content"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.755222 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerName="extract-content"
Nov 28 17:26:50 crc kubenswrapper[4710]: E1128 17:26:50.755267 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24989137-409c-4abb-96da-a28e2382b122" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.755279 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="24989137-409c-4abb-96da-a28e2382b122" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 28 17:26:50 crc kubenswrapper[4710]: E1128 17:26:50.755318 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerName="registry-server"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.755350 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerName="registry-server"
Nov 28 17:26:50 crc kubenswrapper[4710]: E1128 17:26:50.755371 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerName="extract-utilities"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.755383 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerName="extract-utilities"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.755837 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="5263d384-33b1-4e7e-8377-f5e0ebf04372" containerName="registry-server"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.755878 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="24989137-409c-4abb-96da-a28e2382b122" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.756949 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.759429 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.759911 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.760141 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.760180 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.769449 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"]
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.781795 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t777\" (UniqueName: \"kubernetes.io/projected/e16d30ed-d490-425c-804b-c633d6286195-kube-api-access-6t777\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.781935 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.782010 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.883814 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.883934 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t777\" (UniqueName: \"kubernetes.io/projected/e16d30ed-d490-425c-804b-c633d6286195-kube-api-access-6t777\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.884032 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.889042 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.889042 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:50 crc kubenswrapper[4710]: I1128 17:26:50.910827 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t777\" (UniqueName: \"kubernetes.io/projected/e16d30ed-d490-425c-804b-c633d6286195-kube-api-access-6t777\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:51 crc kubenswrapper[4710]: I1128 17:26:51.082679 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"
Nov 28 17:26:51 crc kubenswrapper[4710]: I1128 17:26:51.151558 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:26:51 crc kubenswrapper[4710]: E1128 17:26:51.152108 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:26:51 crc kubenswrapper[4710]: I1128 17:26:51.622095 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp"]
Nov 28 17:26:51 crc kubenswrapper[4710]: I1128 17:26:51.718539 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp" event={"ID":"e16d30ed-d490-425c-804b-c633d6286195","Type":"ContainerStarted","Data":"f0c934dc59ebab2acd272d55892a81ac5609a0cbb4e526389e5b3da2f1bcb641"}
Nov 28 17:26:52 crc kubenswrapper[4710]: I1128 17:26:52.787840 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp" event={"ID":"e16d30ed-d490-425c-804b-c633d6286195","Type":"ContainerStarted","Data":"1d90faecf9582029c2c49d5b89f9d7beeb6051894e7ae21da823be7f6abb691e"}
Nov 28 17:26:52 crc kubenswrapper[4710]: I1128 17:26:52.809477 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp" podStartSLOduration=2.390433648 podStartE2EDuration="2.809455682s" podCreationTimestamp="2025-11-28 17:26:50 +0000 UTC" firstStartedPulling="2025-11-28 17:26:51.621179448 +0000 UTC m=+1700.879479493" lastFinishedPulling="2025-11-28 17:26:52.040201482 +0000 UTC m=+1701.298501527" observedRunningTime="2025-11-28 17:26:52.806699784 +0000 UTC m=+1702.064999829" watchObservedRunningTime="2025-11-28 17:26:52.809455682 +0000 UTC m=+1702.067755727"
Nov 28 17:27:02 crc kubenswrapper[4710]: I1128 17:27:02.142210 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:27:02 crc kubenswrapper[4710]: E1128 17:27:02.143004 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:27:17 crc kubenswrapper[4710]: I1128 17:27:17.141822 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:27:17 crc kubenswrapper[4710]: E1128 17:27:17.142825 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:27:28 crc kubenswrapper[4710]: I1128 17:27:28.141548 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:27:28 crc kubenswrapper[4710]: E1128 17:27:28.142512 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:27:40 crc kubenswrapper[4710]: I1128 17:27:40.141826 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:27:40 crc kubenswrapper[4710]: E1128 17:27:40.142505 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 17:27:51 crc kubenswrapper[4710]: I1128 17:27:51.152827 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33"
Nov 28 17:27:51 crc kubenswrapper[4710]: E1128 17:27:51.153748 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon
pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:28:02 crc kubenswrapper[4710]: I1128 17:28:02.074048 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6571-account-create-update-rg7kx"] Nov 28 17:28:02 crc kubenswrapper[4710]: I1128 17:28:02.095248 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jcjn5"] Nov 28 17:28:02 crc kubenswrapper[4710]: I1128 17:28:02.114338 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jcjn5"] Nov 28 17:28:02 crc kubenswrapper[4710]: I1128 17:28:02.130750 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6571-account-create-update-rg7kx"] Nov 28 17:28:03 crc kubenswrapper[4710]: I1128 17:28:03.030491 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6b0c-account-create-update-m2272"] Nov 28 17:28:03 crc kubenswrapper[4710]: I1128 17:28:03.042997 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6b0c-account-create-update-m2272"] Nov 28 17:28:03 crc kubenswrapper[4710]: I1128 17:28:03.152332 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1053ff5c-8aab-40a1-8a79-6f85ab9a2be5" path="/var/lib/kubelet/pods/1053ff5c-8aab-40a1-8a79-6f85ab9a2be5/volumes" Nov 28 17:28:03 crc kubenswrapper[4710]: I1128 17:28:03.152980 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afdf0bd2-a972-4148-9e0f-49f5d1f90f1c" path="/var/lib/kubelet/pods/afdf0bd2-a972-4148-9e0f-49f5d1f90f1c/volumes" Nov 28 17:28:03 crc kubenswrapper[4710]: I1128 17:28:03.154005 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ece12b78-9c4f-44aa-bb24-2737fca7003c" path="/var/lib/kubelet/pods/ece12b78-9c4f-44aa-bb24-2737fca7003c/volumes" Nov 28 17:28:04 crc kubenswrapper[4710]: I1128 17:28:04.033044 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-rx2v2"] Nov 28 17:28:04 crc kubenswrapper[4710]: I1128 17:28:04.058374 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-c6a1-account-create-update-6jclb"] Nov 28 17:28:04 crc kubenswrapper[4710]: I1128 17:28:04.071281 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-sm9v2"] Nov 28 17:28:04 crc kubenswrapper[4710]: I1128 17:28:04.126112 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-rx2v2"] Nov 28 17:28:04 crc kubenswrapper[4710]: I1128 17:28:04.137266 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-sm9v2"] Nov 28 17:28:04 crc kubenswrapper[4710]: I1128 17:28:04.147880 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-c6a1-account-create-update-6jclb"] Nov 28 17:28:05 crc kubenswrapper[4710]: I1128 17:28:05.153527 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ad53638-4b25-4cd6-bbd3-dcb7e577467e" path="/var/lib/kubelet/pods/8ad53638-4b25-4cd6-bbd3-dcb7e577467e/volumes" Nov 28 17:28:05 crc kubenswrapper[4710]: I1128 17:28:05.154529 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e97a646-985c-4a67-8cb6-c817e73c30e2" path="/var/lib/kubelet/pods/9e97a646-985c-4a67-8cb6-c817e73c30e2/volumes" Nov 28 17:28:05 crc kubenswrapper[4710]: I1128 17:28:05.155271 
4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f43f5116-81fd-41d7-8509-1ff325cce28a" path="/var/lib/kubelet/pods/f43f5116-81fd-41d7-8509-1ff325cce28a/volumes" Nov 28 17:28:06 crc kubenswrapper[4710]: I1128 17:28:06.142200 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:28:06 crc kubenswrapper[4710]: E1128 17:28:06.142523 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:28:17 crc kubenswrapper[4710]: I1128 17:28:17.142065 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:28:17 crc kubenswrapper[4710]: E1128 17:28:17.142845 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:28:21 crc kubenswrapper[4710]: I1128 17:28:21.755792 4710 generic.go:334] "Generic (PLEG): container finished" podID="e16d30ed-d490-425c-804b-c633d6286195" containerID="1d90faecf9582029c2c49d5b89f9d7beeb6051894e7ae21da823be7f6abb691e" exitCode=0 Nov 28 17:28:21 crc kubenswrapper[4710]: I1128 17:28:21.755866 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp" event={"ID":"e16d30ed-d490-425c-804b-c633d6286195","Type":"ContainerDied","Data":"1d90faecf9582029c2c49d5b89f9d7beeb6051894e7ae21da823be7f6abb691e"} Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.191830 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.305072 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-ssh-key\") pod \"e16d30ed-d490-425c-804b-c633d6286195\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.305203 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-inventory\") pod \"e16d30ed-d490-425c-804b-c633d6286195\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.305327 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t777\" (UniqueName: \"kubernetes.io/projected/e16d30ed-d490-425c-804b-c633d6286195-kube-api-access-6t777\") pod \"e16d30ed-d490-425c-804b-c633d6286195\" (UID: \"e16d30ed-d490-425c-804b-c633d6286195\") " Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.312940 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16d30ed-d490-425c-804b-c633d6286195-kube-api-access-6t777" (OuterVolumeSpecName: "kube-api-access-6t777") pod "e16d30ed-d490-425c-804b-c633d6286195" (UID: "e16d30ed-d490-425c-804b-c633d6286195"). InnerVolumeSpecName "kube-api-access-6t777". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.344962 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-inventory" (OuterVolumeSpecName: "inventory") pod "e16d30ed-d490-425c-804b-c633d6286195" (UID: "e16d30ed-d490-425c-804b-c633d6286195"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.350329 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e16d30ed-d490-425c-804b-c633d6286195" (UID: "e16d30ed-d490-425c-804b-c633d6286195"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.407341 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.407375 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e16d30ed-d490-425c-804b-c633d6286195-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.407384 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t777\" (UniqueName: \"kubernetes.io/projected/e16d30ed-d490-425c-804b-c633d6286195-kube-api-access-6t777\") on node \"crc\" DevicePath \"\"" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.777874 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp" event={"ID":"e16d30ed-d490-425c-804b-c633d6286195","Type":"ContainerDied","Data":"f0c934dc59ebab2acd272d55892a81ac5609a0cbb4e526389e5b3da2f1bcb641"} Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.777917 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0c934dc59ebab2acd272d55892a81ac5609a0cbb4e526389e5b3da2f1bcb641" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.777986 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.855939 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k"] Nov 28 17:28:23 crc kubenswrapper[4710]: E1128 17:28:23.856425 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e16d30ed-d490-425c-804b-c633d6286195" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.856443 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e16d30ed-d490-425c-804b-c633d6286195" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.856673 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="e16d30ed-d490-425c-804b-c633d6286195" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.857520 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.859759 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.859810 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.860097 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.860274 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.867328 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k"] Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.917241 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ql95k\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.917304 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ql95k\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:23 crc kubenswrapper[4710]: I1128 17:28:23.917330 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wj8j\" (UniqueName: \"kubernetes.io/projected/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-kube-api-access-5wj8j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ql95k\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:24 crc kubenswrapper[4710]: I1128 17:28:24.019148 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ql95k\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:24 crc kubenswrapper[4710]: I1128 17:28:24.019204 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ql95k\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:24 crc kubenswrapper[4710]: I1128 17:28:24.019230 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wj8j\" (UniqueName: \"kubernetes.io/projected/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-kube-api-access-5wj8j\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ql95k\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:24 crc kubenswrapper[4710]: I1128 17:28:24.024152 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ql95k\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:24 crc kubenswrapper[4710]: I1128 17:28:24.026102 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ql95k\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:24 crc kubenswrapper[4710]: I1128 17:28:24.045935 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wj8j\" (UniqueName: \"kubernetes.io/projected/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-kube-api-access-5wj8j\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ql95k\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:24 crc kubenswrapper[4710]: I1128 17:28:24.185010 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:28:24 crc kubenswrapper[4710]: I1128 17:28:24.716804 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k"] Nov 28 17:28:24 crc kubenswrapper[4710]: W1128 17:28:24.718565 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fc16997_7ac9_4f0f_aec1_32bed7b875b0.slice/crio-986134109a6f76674fd5eb7e86865d0bfc1f14ec4f6ce778c513f98439aa37e2 WatchSource:0}: Error finding container 986134109a6f76674fd5eb7e86865d0bfc1f14ec4f6ce778c513f98439aa37e2: Status 404 returned error can't find the container with id 986134109a6f76674fd5eb7e86865d0bfc1f14ec4f6ce778c513f98439aa37e2 Nov 28 17:28:24 crc kubenswrapper[4710]: I1128 17:28:24.789874 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" event={"ID":"6fc16997-7ac9-4f0f-aec1-32bed7b875b0","Type":"ContainerStarted","Data":"986134109a6f76674fd5eb7e86865d0bfc1f14ec4f6ce778c513f98439aa37e2"} Nov 28 17:28:25 crc kubenswrapper[4710]: I1128 17:28:25.807380 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" event={"ID":"6fc16997-7ac9-4f0f-aec1-32bed7b875b0","Type":"ContainerStarted","Data":"9717253cee7535151d1166b3ceac316dae07cc305f90522263e0f9a139818855"} Nov 28 17:28:25 crc kubenswrapper[4710]: I1128 17:28:25.822651 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" podStartSLOduration=2.212534358 podStartE2EDuration="2.822632358s" podCreationTimestamp="2025-11-28 17:28:23 +0000 UTC" firstStartedPulling="2025-11-28 17:28:24.721067621 +0000 UTC 
m=+1793.979367666" lastFinishedPulling="2025-11-28 17:28:25.331165621 +0000 UTC m=+1794.589465666" observedRunningTime="2025-11-28 17:28:25.821134301 +0000 UTC m=+1795.079434356" watchObservedRunningTime="2025-11-28 17:28:25.822632358 +0000 UTC m=+1795.080932403" Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.067686 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-rrxrk"] Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.081602 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ab78-account-create-update-pxdld"] Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.091515 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-rrxrk"] Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.102054 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-m7vnw"] Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.111286 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ab78-account-create-update-pxdld"] Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.119628 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-m7vnw"] Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.142306 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:28:29 crc kubenswrapper[4710]: E1128 17:28:29.142637 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.156370 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e03ebace-5cad-464c-bcff-4ba2c6b50467" path="/var/lib/kubelet/pods/e03ebace-5cad-464c-bcff-4ba2c6b50467/volumes" Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.158680 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebce039b-b25e-4102-bfd5-f55b7f0fa9b8" path="/var/lib/kubelet/pods/ebce039b-b25e-4102-bfd5-f55b7f0fa9b8/volumes" Nov 28 17:28:29 crc kubenswrapper[4710]: I1128 17:28:29.159469 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff0ceecf-c774-4ee0-875b-44d4f58288a7" path="/var/lib/kubelet/pods/ff0ceecf-c774-4ee0-875b-44d4f58288a7/volumes" Nov 28 17:28:33 crc kubenswrapper[4710]: I1128 17:28:33.027255 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-m6cp2"] Nov 28 17:28:33 crc kubenswrapper[4710]: I1128 17:28:33.035887 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-68ab-account-create-update-k9hvj"] Nov 28 17:28:33 crc kubenswrapper[4710]: I1128 17:28:33.044834 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3fab-account-create-update-b5ps7"] Nov 28 17:28:33 crc kubenswrapper[4710]: I1128 17:28:33.054102 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-68ab-account-create-update-k9hvj"] Nov 28 17:28:33 crc kubenswrapper[4710]: I1128 17:28:33.061837 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-m6cp2"] Nov 28 17:28:33 crc 
kubenswrapper[4710]: I1128 17:28:33.069856 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3fab-account-create-update-b5ps7"] Nov 28 17:28:33 crc kubenswrapper[4710]: I1128 17:28:33.153572 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fdd3903-bbd8-4721-ae3b-866cbc2a73a7" path="/var/lib/kubelet/pods/3fdd3903-bbd8-4721-ae3b-866cbc2a73a7/volumes" Nov 28 17:28:33 crc kubenswrapper[4710]: I1128 17:28:33.154543 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc0af7ed-b562-4fcf-aaa1-f8b769241a67" path="/var/lib/kubelet/pods/dc0af7ed-b562-4fcf-aaa1-f8b769241a67/volumes" Nov 28 17:28:33 crc kubenswrapper[4710]: I1128 17:28:33.155311 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fac18c14-9769-4d41-b867-de23b4a81a79" path="/var/lib/kubelet/pods/fac18c14-9769-4d41-b867-de23b4a81a79/volumes" Nov 28 17:28:38 crc kubenswrapper[4710]: I1128 17:28:38.035163 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-rt2kz"] Nov 28 17:28:38 crc kubenswrapper[4710]: I1128 17:28:38.047694 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-rt2kz"] Nov 28 17:28:39 crc kubenswrapper[4710]: I1128 17:28:39.160460 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82dc6718-a141-4d1c-83b0-b08f4d5a8708" path="/var/lib/kubelet/pods/82dc6718-a141-4d1c-83b0-b08f4d5a8708/volumes" Nov 28 17:28:40 crc kubenswrapper[4710]: I1128 17:28:40.142796 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:28:40 crc kubenswrapper[4710]: E1128 17:28:40.143295 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:28:40 crc kubenswrapper[4710]: I1128 17:28:40.675927 4710 scope.go:117] "RemoveContainer" containerID="8a725516ff45127561ac16fc0247e7efe9a6998a964fcbe71a7f3a44e88519ee" Nov 28 17:28:40 crc kubenswrapper[4710]: I1128 17:28:40.707137 4710 scope.go:117] "RemoveContainer" containerID="9d349382d11f56eb75155aeae7b9d92047fb6e48c98fac5f8a3db865e03a0a54" Nov 28 17:28:40 crc kubenswrapper[4710]: I1128 17:28:40.776008 4710 scope.go:117] "RemoveContainer" containerID="9d0d476635d4b4b703a7830df150d8bdfd482008b14617d2c62e44618860b199" Nov 28 17:28:40 crc kubenswrapper[4710]: I1128 17:28:40.823460 4710 scope.go:117] "RemoveContainer" containerID="bf0dd181cc047d9e06a7b13646c1c9f33ed4b7598cf819c0c23fb318707d6d08" Nov 28 17:28:40 crc kubenswrapper[4710]: I1128 17:28:40.875220 4710 scope.go:117] "RemoveContainer" containerID="294b0a45e9b55f2afe78da566f64370a5e943eb127ad0ae9a3e7939ee24d4927" Nov 28 17:28:40 crc kubenswrapper[4710]: I1128 17:28:40.924565 4710 scope.go:117] "RemoveContainer" containerID="8b80f8bc25903344d34bef3d1815d369c98f0f341ea0d5e3cd7c4f8592c1ebc6" Nov 28 17:28:40 crc kubenswrapper[4710]: I1128 17:28:40.972684 4710 scope.go:117] "RemoveContainer" containerID="86ffcc08560ffa52ab084f23dd09bbff2bb05a822b4ddda4323f5971c78d2911" Nov 28 17:28:40 crc kubenswrapper[4710]: I1128 17:28:40.993784 4710 scope.go:117] "RemoveContainer" 
containerID="388cda1b5dde397b07d801e697d061f62f7971e0a9bef69ee6d89a677bc12347" Nov 28 17:28:41 crc kubenswrapper[4710]: I1128 17:28:41.014770 4710 scope.go:117] "RemoveContainer" containerID="d91b710966156edb3b1ee13fae8606e3ed707217b28cf5729f4f5e6259f2a5e0" Nov 28 17:28:41 crc kubenswrapper[4710]: I1128 17:28:41.039037 4710 scope.go:117] "RemoveContainer" containerID="cb98005ba3317cd7fba72c4655926b8c9a2ec6c45621dcfb53deff26b1c2bd50" Nov 28 17:28:41 crc kubenswrapper[4710]: I1128 17:28:41.081611 4710 scope.go:117] "RemoveContainer" containerID="74358610562ca38634a776eaaaed7138a2760140e166c7e08d5e1c9dd7c1335c" Nov 28 17:28:41 crc kubenswrapper[4710]: I1128 17:28:41.107536 4710 scope.go:117] "RemoveContainer" containerID="99ce6b0bc21226ae929aef16d5409005cf8a1690d76cde10a8a9ef6255fef34f" Nov 28 17:28:41 crc kubenswrapper[4710]: I1128 17:28:41.135542 4710 scope.go:117] "RemoveContainer" containerID="cc63ebbeb782c1957f113a9d46257d10dec462ae32af3889d56f766308a77fcb" Nov 28 17:28:55 crc kubenswrapper[4710]: I1128 17:28:55.142677 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:28:55 crc kubenswrapper[4710]: E1128 17:28:55.143455 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:29:04 crc kubenswrapper[4710]: I1128 17:29:04.043891 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xw8td"] Nov 28 17:29:04 crc kubenswrapper[4710]: I1128 17:29:04.054105 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xw8td"] Nov 28 17:29:05 crc kubenswrapper[4710]: I1128 17:29:05.030733 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-zl7sx"] Nov 28 17:29:05 crc kubenswrapper[4710]: I1128 17:29:05.042062 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-zl7sx"] Nov 28 17:29:05 crc kubenswrapper[4710]: I1128 17:29:05.153798 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3835d37-f072-4310-a667-a7f398e80ab1" path="/var/lib/kubelet/pods/a3835d37-f072-4310-a667-a7f398e80ab1/volumes" Nov 28 17:29:05 crc kubenswrapper[4710]: I1128 17:29:05.155111 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0a3540-9534-46cf-8ecd-c32878e75b08" path="/var/lib/kubelet/pods/df0a3540-9534-46cf-8ecd-c32878e75b08/volumes" Nov 28 17:29:08 crc kubenswrapper[4710]: I1128 17:29:08.141502 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:29:08 crc kubenswrapper[4710]: E1128 17:29:08.142162 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:29:09 crc kubenswrapper[4710]: I1128 17:29:09.035158 4710 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/placement-db-sync-n5chx"] Nov 28 17:29:09 crc kubenswrapper[4710]: I1128 17:29:09.044911 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-n5chx"] Nov 28 17:29:09 crc kubenswrapper[4710]: I1128 17:29:09.155118 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cab500d6-0a90-45c1-b760-53db118834a3" path="/var/lib/kubelet/pods/cab500d6-0a90-45c1-b760-53db118834a3/volumes" Nov 28 17:29:15 crc kubenswrapper[4710]: I1128 17:29:15.035222 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mtmgk"] Nov 28 17:29:15 crc kubenswrapper[4710]: I1128 17:29:15.048969 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mtmgk"] Nov 28 17:29:15 crc kubenswrapper[4710]: I1128 17:29:15.155067 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0502e48f-0338-42fa-9403-e87c11997261" path="/var/lib/kubelet/pods/0502e48f-0338-42fa-9403-e87c11997261/volumes" Nov 28 17:29:19 crc kubenswrapper[4710]: I1128 17:29:19.141617 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:29:19 crc kubenswrapper[4710]: E1128 17:29:19.142477 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:29:23 crc kubenswrapper[4710]: I1128 17:29:23.050950 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9mv8x"] Nov 28 17:29:23 crc kubenswrapper[4710]: I1128 17:29:23.061906 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9mv8x"] Nov 28 17:29:23 crc kubenswrapper[4710]: I1128 17:29:23.155567 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03a0db7-fab9-4d77-8f2e-368c122983ca" path="/var/lib/kubelet/pods/f03a0db7-fab9-4d77-8f2e-368c122983ca/volumes" Nov 28 17:29:30 crc kubenswrapper[4710]: I1128 17:29:30.141781 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:29:30 crc kubenswrapper[4710]: E1128 17:29:30.142370 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:29:36 crc kubenswrapper[4710]: I1128 17:29:36.540809 4710 generic.go:334] "Generic (PLEG): container finished" podID="6fc16997-7ac9-4f0f-aec1-32bed7b875b0" containerID="9717253cee7535151d1166b3ceac316dae07cc305f90522263e0f9a139818855" exitCode=0 Nov 28 17:29:36 crc kubenswrapper[4710]: I1128 17:29:36.540888 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" event={"ID":"6fc16997-7ac9-4f0f-aec1-32bed7b875b0","Type":"ContainerDied","Data":"9717253cee7535151d1166b3ceac316dae07cc305f90522263e0f9a139818855"} Nov 28 
17:29:37 crc kubenswrapper[4710]: I1128 17:29:37.072004 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-f2xjj"] Nov 28 17:29:37 crc kubenswrapper[4710]: I1128 17:29:37.082911 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-f2xjj"] Nov 28 17:29:37 crc kubenswrapper[4710]: I1128 17:29:37.153610 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eedde5de-ead1-462b-a55f-3473c0f09f43" path="/var/lib/kubelet/pods/eedde5de-ead1-462b-a55f-3473c0f09f43/volumes" Nov 28 17:29:37 crc kubenswrapper[4710]: I1128 17:29:37.991944 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.130069 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-ssh-key\") pod \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.130123 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-inventory\") pod \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.130387 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wj8j\" (UniqueName: \"kubernetes.io/projected/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-kube-api-access-5wj8j\") pod \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\" (UID: \"6fc16997-7ac9-4f0f-aec1-32bed7b875b0\") " Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.135470 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-kube-api-access-5wj8j" (OuterVolumeSpecName: "kube-api-access-5wj8j") pod "6fc16997-7ac9-4f0f-aec1-32bed7b875b0" (UID: "6fc16997-7ac9-4f0f-aec1-32bed7b875b0"). InnerVolumeSpecName "kube-api-access-5wj8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.162221 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6fc16997-7ac9-4f0f-aec1-32bed7b875b0" (UID: "6fc16997-7ac9-4f0f-aec1-32bed7b875b0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.182740 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-inventory" (OuterVolumeSpecName: "inventory") pod "6fc16997-7ac9-4f0f-aec1-32bed7b875b0" (UID: "6fc16997-7ac9-4f0f-aec1-32bed7b875b0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.233487 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wj8j\" (UniqueName: \"kubernetes.io/projected/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-kube-api-access-5wj8j\") on node \"crc\" DevicePath \"\"" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.233523 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.233534 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fc16997-7ac9-4f0f-aec1-32bed7b875b0-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.567837 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" event={"ID":"6fc16997-7ac9-4f0f-aec1-32bed7b875b0","Type":"ContainerDied","Data":"986134109a6f76674fd5eb7e86865d0bfc1f14ec4f6ce778c513f98439aa37e2"} Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.567882 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="986134109a6f76674fd5eb7e86865d0bfc1f14ec4f6ce778c513f98439aa37e2" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.567895 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ql95k" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.665222 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv"] Nov 28 17:29:38 crc kubenswrapper[4710]: E1128 17:29:38.665924 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fc16997-7ac9-4f0f-aec1-32bed7b875b0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.665955 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fc16997-7ac9-4f0f-aec1-32bed7b875b0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.666280 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fc16997-7ac9-4f0f-aec1-32bed7b875b0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.667515 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.670847 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.670897 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.671405 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.679454 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.695252 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv"] Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.743784 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.744477 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dlgd\" (UniqueName: \"kubernetes.io/projected/599fb57d-7ff9-42b2-bee1-30f542a56d12-kube-api-access-4dlgd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.745066 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.849454 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dlgd\" (UniqueName: \"kubernetes.io/projected/599fb57d-7ff9-42b2-bee1-30f542a56d12-kube-api-access-4dlgd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.849719 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.849883 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.857247 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.860498 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.879027 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dlgd\" (UniqueName: \"kubernetes.io/projected/599fb57d-7ff9-42b2-bee1-30f542a56d12-kube-api-access-4dlgd\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:38 crc kubenswrapper[4710]: I1128 17:29:38.999502 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:39 crc kubenswrapper[4710]: I1128 17:29:39.582560 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv"] Nov 28 17:29:39 crc kubenswrapper[4710]: W1128 17:29:39.584428 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod599fb57d_7ff9_42b2_bee1_30f542a56d12.slice/crio-c2ea163a2ab3de6783f57be0b13c39cc038789207f2080cec4efe9dbf74cb36e WatchSource:0}: Error finding container c2ea163a2ab3de6783f57be0b13c39cc038789207f2080cec4efe9dbf74cb36e: Status 404 returned error can't find the container with id c2ea163a2ab3de6783f57be0b13c39cc038789207f2080cec4efe9dbf74cb36e Nov 28 17:29:39 crc kubenswrapper[4710]: I1128 17:29:39.589049 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:29:40 crc kubenswrapper[4710]: I1128 17:29:40.587985 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" event={"ID":"599fb57d-7ff9-42b2-bee1-30f542a56d12","Type":"ContainerStarted","Data":"d04be5d0989fd9d715e305704bb1a234a8242048eb340d62eddbe4fb1c949abd"} Nov 28 17:29:40 crc kubenswrapper[4710]: I1128 17:29:40.588332 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" event={"ID":"599fb57d-7ff9-42b2-bee1-30f542a56d12","Type":"ContainerStarted","Data":"c2ea163a2ab3de6783f57be0b13c39cc038789207f2080cec4efe9dbf74cb36e"} Nov 28 17:29:40 crc kubenswrapper[4710]: I1128 17:29:40.607468 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" podStartSLOduration=2.105663581 
podStartE2EDuration="2.607449716s" podCreationTimestamp="2025-11-28 17:29:38 +0000 UTC" firstStartedPulling="2025-11-28 17:29:39.588344582 +0000 UTC m=+1868.846644647" lastFinishedPulling="2025-11-28 17:29:40.090130727 +0000 UTC m=+1869.348430782" observedRunningTime="2025-11-28 17:29:40.60221651 +0000 UTC m=+1869.860516565" watchObservedRunningTime="2025-11-28 17:29:40.607449716 +0000 UTC m=+1869.865749761" Nov 28 17:29:41 crc kubenswrapper[4710]: I1128 17:29:41.449287 4710 scope.go:117] "RemoveContainer" containerID="f29664a6a5bf62a66f20f2c248f0af3bf4caaba8bf83feafdbfd1f78f62e8fb0" Nov 28 17:29:41 crc kubenswrapper[4710]: I1128 17:29:41.497867 4710 scope.go:117] "RemoveContainer" containerID="ee00b88d2fd20227ce434deceb3a2801039dbc78f7fa0413ec0b7e6dc9387ecb" Nov 28 17:29:41 crc kubenswrapper[4710]: I1128 17:29:41.571989 4710 scope.go:117] "RemoveContainer" containerID="4af03b23471f9f2bd5093dfe34255de6e6c35f8acc71fefa583e1569cc1c3392" Nov 28 17:29:41 crc kubenswrapper[4710]: I1128 17:29:41.634457 4710 scope.go:117] "RemoveContainer" containerID="2a2518cb61eda9edc870303286bc6c255c0b39265f87554c7f3078eb3c5546c3" Nov 28 17:29:41 crc kubenswrapper[4710]: I1128 17:29:41.695868 4710 scope.go:117] "RemoveContainer" containerID="10f6124b673a813aceb84e9ef92ced2a7ba126aa788aff51c77d30ac183cac24" Nov 28 17:29:41 crc kubenswrapper[4710]: I1128 17:29:41.744483 4710 scope.go:117] "RemoveContainer" containerID="7d5076a971ad39755d96e5c6f6fb865b1796577214752160bf982a0ee5c69b44" Nov 28 17:29:42 crc kubenswrapper[4710]: I1128 17:29:42.141879 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:29:42 crc kubenswrapper[4710]: E1128 17:29:42.142134 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:29:45 crc kubenswrapper[4710]: I1128 17:29:45.643979 4710 generic.go:334] "Generic (PLEG): container finished" podID="599fb57d-7ff9-42b2-bee1-30f542a56d12" containerID="d04be5d0989fd9d715e305704bb1a234a8242048eb340d62eddbe4fb1c949abd" exitCode=0 Nov 28 17:29:45 crc kubenswrapper[4710]: I1128 17:29:45.644070 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" event={"ID":"599fb57d-7ff9-42b2-bee1-30f542a56d12","Type":"ContainerDied","Data":"d04be5d0989fd9d715e305704bb1a234a8242048eb340d62eddbe4fb1c949abd"} Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.116444 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.240736 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dlgd\" (UniqueName: \"kubernetes.io/projected/599fb57d-7ff9-42b2-bee1-30f542a56d12-kube-api-access-4dlgd\") pod \"599fb57d-7ff9-42b2-bee1-30f542a56d12\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.240905 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-inventory\") pod \"599fb57d-7ff9-42b2-bee1-30f542a56d12\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.241097 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-ssh-key\") pod \"599fb57d-7ff9-42b2-bee1-30f542a56d12\" (UID: \"599fb57d-7ff9-42b2-bee1-30f542a56d12\") " Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.246838 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599fb57d-7ff9-42b2-bee1-30f542a56d12-kube-api-access-4dlgd" (OuterVolumeSpecName: "kube-api-access-4dlgd") pod "599fb57d-7ff9-42b2-bee1-30f542a56d12" (UID: "599fb57d-7ff9-42b2-bee1-30f542a56d12"). InnerVolumeSpecName "kube-api-access-4dlgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.271493 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "599fb57d-7ff9-42b2-bee1-30f542a56d12" (UID: "599fb57d-7ff9-42b2-bee1-30f542a56d12"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.277312 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-inventory" (OuterVolumeSpecName: "inventory") pod "599fb57d-7ff9-42b2-bee1-30f542a56d12" (UID: "599fb57d-7ff9-42b2-bee1-30f542a56d12"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.344067 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.344114 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/599fb57d-7ff9-42b2-bee1-30f542a56d12-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.344126 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dlgd\" (UniqueName: \"kubernetes.io/projected/599fb57d-7ff9-42b2-bee1-30f542a56d12-kube-api-access-4dlgd\") on node \"crc\" DevicePath \"\"" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.669910 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" event={"ID":"599fb57d-7ff9-42b2-bee1-30f542a56d12","Type":"ContainerDied","Data":"c2ea163a2ab3de6783f57be0b13c39cc038789207f2080cec4efe9dbf74cb36e"} Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.669955 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ea163a2ab3de6783f57be0b13c39cc038789207f2080cec4efe9dbf74cb36e" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.669994 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.741823 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2"] Nov 28 17:29:47 crc kubenswrapper[4710]: E1128 17:29:47.742475 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599fb57d-7ff9-42b2-bee1-30f542a56d12" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.742502 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="599fb57d-7ff9-42b2-bee1-30f542a56d12" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.742796 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="599fb57d-7ff9-42b2-bee1-30f542a56d12" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.743615 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.746443 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.746461 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.746499 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.746709 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.755702 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2"] Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.852796 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gv2n2\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.853100 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46ctf\" (UniqueName: \"kubernetes.io/projected/762129bb-bd6f-46a3-87e5-38b37476e994-kube-api-access-46ctf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gv2n2\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.853186 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gv2n2\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.954714 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gv2n2\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.954772 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46ctf\" (UniqueName: \"kubernetes.io/projected/762129bb-bd6f-46a3-87e5-38b37476e994-kube-api-access-46ctf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gv2n2\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.954854 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gv2n2\" (UID: 
\"762129bb-bd6f-46a3-87e5-38b37476e994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.960647 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gv2n2\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.962751 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gv2n2\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:47 crc kubenswrapper[4710]: I1128 17:29:47.973369 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46ctf\" (UniqueName: \"kubernetes.io/projected/762129bb-bd6f-46a3-87e5-38b37476e994-kube-api-access-46ctf\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-gv2n2\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:48 crc kubenswrapper[4710]: I1128 17:29:48.065984 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:29:48 crc kubenswrapper[4710]: I1128 17:29:48.633386 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2"] Nov 28 17:29:48 crc kubenswrapper[4710]: I1128 17:29:48.681437 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" event={"ID":"762129bb-bd6f-46a3-87e5-38b37476e994","Type":"ContainerStarted","Data":"2944aac9349b368325dedbe287767f5984daab79e58ec4f825f2a3235ffcc28a"} Nov 28 17:29:49 crc kubenswrapper[4710]: I1128 17:29:49.694017 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" event={"ID":"762129bb-bd6f-46a3-87e5-38b37476e994","Type":"ContainerStarted","Data":"5b67b255b9093dae8ea78bfbd0225e0c416a210e1ec6ebdab848434c0e588f5b"} Nov 28 17:29:49 crc kubenswrapper[4710]: I1128 17:29:49.724105 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" podStartSLOduration=2.3014105320000002 podStartE2EDuration="2.724088702s" podCreationTimestamp="2025-11-28 17:29:47 +0000 UTC" firstStartedPulling="2025-11-28 17:29:48.631662996 +0000 UTC m=+1877.889963041" lastFinishedPulling="2025-11-28 17:29:49.054341166 +0000 UTC m=+1878.312641211" observedRunningTime="2025-11-28 17:29:49.719640111 +0000 UTC m=+1878.977940156" watchObservedRunningTime="2025-11-28 17:29:49.724088702 +0000 UTC m=+1878.982388747" Nov 28 17:29:56 crc kubenswrapper[4710]: I1128 17:29:56.141509 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:29:56 crc kubenswrapper[4710]: I1128 17:29:56.763162 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" 
event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"525474b05cc0e8cfad42d7334f3128ea31ed4f5fe6977e6899ad8e185ddc6855"} Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.140617 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr"] Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.143191 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.147912 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.147918 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.175087 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr"] Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.236592 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvvvq\" (UniqueName: \"kubernetes.io/projected/b453cc63-29be-455b-92a8-29793a6e6d69-kube-api-access-gvvvq\") pod \"collect-profiles-29405850-5tzkr\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.236649 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b453cc63-29be-455b-92a8-29793a6e6d69-secret-volume\") pod \"collect-profiles-29405850-5tzkr\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.236987 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b453cc63-29be-455b-92a8-29793a6e6d69-config-volume\") pod \"collect-profiles-29405850-5tzkr\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.338518 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvvvq\" (UniqueName: \"kubernetes.io/projected/b453cc63-29be-455b-92a8-29793a6e6d69-kube-api-access-gvvvq\") pod \"collect-profiles-29405850-5tzkr\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.338562 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b453cc63-29be-455b-92a8-29793a6e6d69-secret-volume\") pod \"collect-profiles-29405850-5tzkr\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.338698 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b453cc63-29be-455b-92a8-29793a6e6d69-config-volume\") pod \"collect-profiles-29405850-5tzkr\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.339673 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b453cc63-29be-455b-92a8-29793a6e6d69-config-volume\") pod \"collect-profiles-29405850-5tzkr\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.344940 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b453cc63-29be-455b-92a8-29793a6e6d69-secret-volume\") pod \"collect-profiles-29405850-5tzkr\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.354935 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvvvq\" (UniqueName: \"kubernetes.io/projected/b453cc63-29be-455b-92a8-29793a6e6d69-kube-api-access-gvvvq\") pod \"collect-profiles-29405850-5tzkr\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.475965 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:00 crc kubenswrapper[4710]: I1128 17:30:00.957417 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr"] Nov 28 17:30:00 crc kubenswrapper[4710]: W1128 17:30:00.958749 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb453cc63_29be_455b_92a8_29793a6e6d69.slice/crio-25e80001b20dbdc11c2cd276034a1c4206330572dc7714de88bdc97cafb204f7 WatchSource:0}: Error finding container 25e80001b20dbdc11c2cd276034a1c4206330572dc7714de88bdc97cafb204f7: Status 404 returned error can't find the container with id 25e80001b20dbdc11c2cd276034a1c4206330572dc7714de88bdc97cafb204f7 Nov 28 17:30:01 crc kubenswrapper[4710]: I1128 17:30:01.809136 4710 generic.go:334] "Generic (PLEG): container finished" podID="b453cc63-29be-455b-92a8-29793a6e6d69" containerID="e7fd32741df56d14e2359ee699d3ad51e1b44157fb1d4a96b0b08796c5dceece" exitCode=0 Nov 28 17:30:01 crc kubenswrapper[4710]: I1128 17:30:01.809195 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" event={"ID":"b453cc63-29be-455b-92a8-29793a6e6d69","Type":"ContainerDied","Data":"e7fd32741df56d14e2359ee699d3ad51e1b44157fb1d4a96b0b08796c5dceece"} Nov 28 17:30:01 crc kubenswrapper[4710]: I1128 17:30:01.809516 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" event={"ID":"b453cc63-29be-455b-92a8-29793a6e6d69","Type":"ContainerStarted","Data":"25e80001b20dbdc11c2cd276034a1c4206330572dc7714de88bdc97cafb204f7"} Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.176077 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.305479 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvvvq\" (UniqueName: \"kubernetes.io/projected/b453cc63-29be-455b-92a8-29793a6e6d69-kube-api-access-gvvvq\") pod \"b453cc63-29be-455b-92a8-29793a6e6d69\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.305570 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b453cc63-29be-455b-92a8-29793a6e6d69-config-volume\") pod \"b453cc63-29be-455b-92a8-29793a6e6d69\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.305626 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b453cc63-29be-455b-92a8-29793a6e6d69-secret-volume\") pod \"b453cc63-29be-455b-92a8-29793a6e6d69\" (UID: \"b453cc63-29be-455b-92a8-29793a6e6d69\") " Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.306494 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b453cc63-29be-455b-92a8-29793a6e6d69-config-volume" (OuterVolumeSpecName: "config-volume") pod "b453cc63-29be-455b-92a8-29793a6e6d69" (UID: "b453cc63-29be-455b-92a8-29793a6e6d69"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.307706 4710 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b453cc63-29be-455b-92a8-29793a6e6d69-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.312160 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b453cc63-29be-455b-92a8-29793a6e6d69-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b453cc63-29be-455b-92a8-29793a6e6d69" (UID: "b453cc63-29be-455b-92a8-29793a6e6d69"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.313374 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b453cc63-29be-455b-92a8-29793a6e6d69-kube-api-access-gvvvq" (OuterVolumeSpecName: "kube-api-access-gvvvq") pod "b453cc63-29be-455b-92a8-29793a6e6d69" (UID: "b453cc63-29be-455b-92a8-29793a6e6d69"). InnerVolumeSpecName "kube-api-access-gvvvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.409727 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvvvq\" (UniqueName: \"kubernetes.io/projected/b453cc63-29be-455b-92a8-29793a6e6d69-kube-api-access-gvvvq\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.409792 4710 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b453cc63-29be-455b-92a8-29793a6e6d69-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.829241 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" event={"ID":"b453cc63-29be-455b-92a8-29793a6e6d69","Type":"ContainerDied","Data":"25e80001b20dbdc11c2cd276034a1c4206330572dc7714de88bdc97cafb204f7"} Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.829527 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25e80001b20dbdc11c2cd276034a1c4206330572dc7714de88bdc97cafb204f7" Nov 28 17:30:03 crc kubenswrapper[4710]: I1128 17:30:03.829317 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405850-5tzkr" Nov 28 17:30:12 crc kubenswrapper[4710]: I1128 17:30:12.054853 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xbg2v"] Nov 28 17:30:12 crc kubenswrapper[4710]: I1128 17:30:12.067580 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-xzg2r"] Nov 28 17:30:12 crc kubenswrapper[4710]: I1128 17:30:12.079150 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xbg2v"] Nov 28 17:30:12 crc kubenswrapper[4710]: I1128 17:30:12.090835 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-6103-account-create-update-5v27b"] Nov 28 17:30:12 crc kubenswrapper[4710]: I1128 17:30:12.102500 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-6103-account-create-update-5v27b"] Nov 28 17:30:12 crc kubenswrapper[4710]: I1128 17:30:12.114436 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-xzg2r"] Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.033046 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-jjfmr"] Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.044098 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e303-account-create-update-s2h9m"] Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.055100 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e303-account-create-update-s2h9m"] Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.064612 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-jjfmr"] Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.091474 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0b59-account-create-update-k7gbg"] Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.108079 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0b59-account-create-update-k7gbg"] Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.158264 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="18dbb15b-b948-436e-8bf0-3800d84f58a3" path="/var/lib/kubelet/pods/18dbb15b-b948-436e-8bf0-3800d84f58a3/volumes" Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.159034 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="260df611-7b77-4b0d-b58a-beae48fe7e46" path="/var/lib/kubelet/pods/260df611-7b77-4b0d-b58a-beae48fe7e46/volumes" Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.159746 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b437e4-c7f2-4750-82e0-b75ab9bc0ea0" path="/var/lib/kubelet/pods/42b437e4-c7f2-4750-82e0-b75ab9bc0ea0/volumes" Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.160405 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c3704a-dd9e-4512-858a-e7de0883d025" path="/var/lib/kubelet/pods/43c3704a-dd9e-4512-858a-e7de0883d025/volumes" Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.161433 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e58811-7bf9-4bba-813d-d6267295e4da" path="/var/lib/kubelet/pods/63e58811-7bf9-4bba-813d-d6267295e4da/volumes" Nov 28 17:30:13 crc kubenswrapper[4710]: I1128 17:30:13.162045 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb564cd2-ed57-49d0-9b9a-a193e5f8418b" path="/var/lib/kubelet/pods/bb564cd2-ed57-49d0-9b9a-a193e5f8418b/volumes" Nov 28 17:30:27 crc kubenswrapper[4710]: I1128 17:30:27.060657 4710 generic.go:334] "Generic (PLEG): container finished" podID="762129bb-bd6f-46a3-87e5-38b37476e994" containerID="5b67b255b9093dae8ea78bfbd0225e0c416a210e1ec6ebdab848434c0e588f5b" exitCode=0 Nov 28 17:30:27 crc kubenswrapper[4710]: I1128 17:30:27.060832 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" event={"ID":"762129bb-bd6f-46a3-87e5-38b37476e994","Type":"ContainerDied","Data":"5b67b255b9093dae8ea78bfbd0225e0c416a210e1ec6ebdab848434c0e588f5b"} Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.517192 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.559705 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-inventory\") pod \"762129bb-bd6f-46a3-87e5-38b37476e994\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.559791 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46ctf\" (UniqueName: \"kubernetes.io/projected/762129bb-bd6f-46a3-87e5-38b37476e994-kube-api-access-46ctf\") pod \"762129bb-bd6f-46a3-87e5-38b37476e994\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.559974 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-ssh-key\") pod \"762129bb-bd6f-46a3-87e5-38b37476e994\" (UID: \"762129bb-bd6f-46a3-87e5-38b37476e994\") " Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.566284 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762129bb-bd6f-46a3-87e5-38b37476e994-kube-api-access-46ctf" (OuterVolumeSpecName: "kube-api-access-46ctf") pod "762129bb-bd6f-46a3-87e5-38b37476e994" (UID: "762129bb-bd6f-46a3-87e5-38b37476e994"). InnerVolumeSpecName "kube-api-access-46ctf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.607109 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "762129bb-bd6f-46a3-87e5-38b37476e994" (UID: "762129bb-bd6f-46a3-87e5-38b37476e994"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.608554 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-inventory" (OuterVolumeSpecName: "inventory") pod "762129bb-bd6f-46a3-87e5-38b37476e994" (UID: "762129bb-bd6f-46a3-87e5-38b37476e994"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.663195 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.663250 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46ctf\" (UniqueName: \"kubernetes.io/projected/762129bb-bd6f-46a3-87e5-38b37476e994-kube-api-access-46ctf\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:28 crc kubenswrapper[4710]: I1128 17:30:28.663269 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/762129bb-bd6f-46a3-87e5-38b37476e994-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.081317 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" event={"ID":"762129bb-bd6f-46a3-87e5-38b37476e994","Type":"ContainerDied","Data":"2944aac9349b368325dedbe287767f5984daab79e58ec4f825f2a3235ffcc28a"} Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.081357 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2944aac9349b368325dedbe287767f5984daab79e58ec4f825f2a3235ffcc28a" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.081359 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-gv2n2" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.175325 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9"] Nov 28 17:30:29 crc kubenswrapper[4710]: E1128 17:30:29.176487 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b453cc63-29be-455b-92a8-29793a6e6d69" containerName="collect-profiles" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.176535 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="b453cc63-29be-455b-92a8-29793a6e6d69" containerName="collect-profiles" Nov 28 17:30:29 crc kubenswrapper[4710]: E1128 17:30:29.176556 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762129bb-bd6f-46a3-87e5-38b37476e994" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.176568 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="762129bb-bd6f-46a3-87e5-38b37476e994" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.177068 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="762129bb-bd6f-46a3-87e5-38b37476e994" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.177117 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="b453cc63-29be-455b-92a8-29793a6e6d69" containerName="collect-profiles" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.178058 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.182446 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.182792 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.182880 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.183245 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.191392 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9"] Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.279993 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.280190 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.280583 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58nxx\" (UniqueName: \"kubernetes.io/projected/8ea0b283-a909-4071-b414-acf02181dc0f-kube-api-access-58nxx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.382879 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58nxx\" (UniqueName: \"kubernetes.io/projected/8ea0b283-a909-4071-b414-acf02181dc0f-kube-api-access-58nxx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.383386 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.383480 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9\" 
(UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.388287 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.388589 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.400250 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58nxx\" (UniqueName: \"kubernetes.io/projected/8ea0b283-a909-4071-b414-acf02181dc0f-kube-api-access-58nxx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:29 crc kubenswrapper[4710]: I1128 17:30:29.498108 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:30:30 crc kubenswrapper[4710]: I1128 17:30:30.041660 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9"] Nov 28 17:30:30 crc kubenswrapper[4710]: I1128 17:30:30.092236 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" event={"ID":"8ea0b283-a909-4071-b414-acf02181dc0f","Type":"ContainerStarted","Data":"a1baddf7d2fbf2237a16a754b3adeac793fb17e765049a11c06ce957bad39e7c"} Nov 28 17:30:31 crc kubenswrapper[4710]: I1128 17:30:31.103129 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" event={"ID":"8ea0b283-a909-4071-b414-acf02181dc0f","Type":"ContainerStarted","Data":"25c8342431843fdb63ac3f4ada646926291bc5b6d182912d458b1e6465858266"} Nov 28 17:30:31 crc kubenswrapper[4710]: I1128 17:30:31.120968 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" podStartSLOduration=1.516487663 podStartE2EDuration="2.120950064s" podCreationTimestamp="2025-11-28 17:30:29 +0000 UTC" firstStartedPulling="2025-11-28 17:30:30.04808624 +0000 UTC m=+1919.306386275" lastFinishedPulling="2025-11-28 17:30:30.652548621 +0000 UTC m=+1919.910848676" observedRunningTime="2025-11-28 17:30:31.116600856 +0000 UTC m=+1920.374900901" watchObservedRunningTime="2025-11-28 17:30:31.120950064 +0000 UTC m=+1920.379250109" Nov 28 17:30:41 crc kubenswrapper[4710]: I1128 17:30:41.890215 4710 scope.go:117] "RemoveContainer" containerID="60dcd3bfbd2b7f73e2d10a4fbf69da0c4cad7c2d91052db0b43bff7a6fe46a66" Nov 28 17:30:41 crc kubenswrapper[4710]: I1128 17:30:41.922226 4710 scope.go:117] "RemoveContainer" containerID="10bdb3058bd37df4b9dfda52dfdc6b8f7b88074755c706636353da3539b356b2" Nov 28 17:30:41 crc kubenswrapper[4710]: I1128 
17:30:41.981345 4710 scope.go:117] "RemoveContainer" containerID="ab51d1d5e5730b440bab928f2a8f2db91c8453c23d53e7a8682e2ca7b518f146" Nov 28 17:30:42 crc kubenswrapper[4710]: I1128 17:30:42.027695 4710 scope.go:117] "RemoveContainer" containerID="8bfd49f29c81fba223c6522d619b1404f21ba362b66c14b4f8c737baf938f6ac" Nov 28 17:30:42 crc kubenswrapper[4710]: I1128 17:30:42.091722 4710 scope.go:117] "RemoveContainer" containerID="86add492a92cc8d990205416a46151ef23eab33cbcca734c16ee56aa8e501119" Nov 28 17:30:42 crc kubenswrapper[4710]: I1128 17:30:42.126988 4710 scope.go:117] "RemoveContainer" containerID="8243f35943325f6a4d70ea6d32ccc7d37b0040c4f024b39e8fbac33b2331aa36" Nov 28 17:30:43 crc kubenswrapper[4710]: I1128 17:30:43.058030 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w7rm2"] Nov 28 17:30:43 crc kubenswrapper[4710]: I1128 17:30:43.069684 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w7rm2"] Nov 28 17:30:43 crc kubenswrapper[4710]: I1128 17:30:43.155348 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4894395-7727-4595-9a50-7a1b2b55a525" path="/var/lib/kubelet/pods/f4894395-7727-4595-9a50-7a1b2b55a525/volumes" Nov 28 17:31:01 crc kubenswrapper[4710]: I1128 17:31:01.045580 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-cgmdw"] Nov 28 17:31:01 crc kubenswrapper[4710]: I1128 17:31:01.057017 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-cgmdw"] Nov 28 17:31:01 crc kubenswrapper[4710]: I1128 17:31:01.154297 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b567f65-7af2-494a-9846-77428c466361" path="/var/lib/kubelet/pods/5b567f65-7af2-494a-9846-77428c466361/volumes" Nov 28 17:31:02 crc kubenswrapper[4710]: I1128 17:31:02.042841 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wmppk"] Nov 28 17:31:02 crc kubenswrapper[4710]: I1128 17:31:02.055832 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wmppk"] Nov 28 17:31:03 crc kubenswrapper[4710]: I1128 17:31:03.162767 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a74731a6-5583-442c-bbe9-67f586a1c383" path="/var/lib/kubelet/pods/a74731a6-5583-442c-bbe9-67f586a1c383/volumes" Nov 28 17:31:23 crc kubenswrapper[4710]: I1128 17:31:23.639383 4710 generic.go:334] "Generic (PLEG): container finished" podID="8ea0b283-a909-4071-b414-acf02181dc0f" containerID="25c8342431843fdb63ac3f4ada646926291bc5b6d182912d458b1e6465858266" exitCode=0 Nov 28 17:31:23 crc kubenswrapper[4710]: I1128 17:31:23.640002 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" event={"ID":"8ea0b283-a909-4071-b414-acf02181dc0f","Type":"ContainerDied","Data":"25c8342431843fdb63ac3f4ada646926291bc5b6d182912d458b1e6465858266"} Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.154821 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.224918 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-ssh-key\") pod \"8ea0b283-a909-4071-b414-acf02181dc0f\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.224991 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58nxx\" (UniqueName: \"kubernetes.io/projected/8ea0b283-a909-4071-b414-acf02181dc0f-kube-api-access-58nxx\") pod \"8ea0b283-a909-4071-b414-acf02181dc0f\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.225146 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-inventory\") pod \"8ea0b283-a909-4071-b414-acf02181dc0f\" (UID: \"8ea0b283-a909-4071-b414-acf02181dc0f\") " Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.230380 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ea0b283-a909-4071-b414-acf02181dc0f-kube-api-access-58nxx" (OuterVolumeSpecName: "kube-api-access-58nxx") pod "8ea0b283-a909-4071-b414-acf02181dc0f" (UID: "8ea0b283-a909-4071-b414-acf02181dc0f"). InnerVolumeSpecName "kube-api-access-58nxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.253879 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-inventory" (OuterVolumeSpecName: "inventory") pod "8ea0b283-a909-4071-b414-acf02181dc0f" (UID: "8ea0b283-a909-4071-b414-acf02181dc0f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.258287 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8ea0b283-a909-4071-b414-acf02181dc0f" (UID: "8ea0b283-a909-4071-b414-acf02181dc0f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.327669 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.327702 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8ea0b283-a909-4071-b414-acf02181dc0f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.327715 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58nxx\" (UniqueName: \"kubernetes.io/projected/8ea0b283-a909-4071-b414-acf02181dc0f-kube-api-access-58nxx\") on node \"crc\" DevicePath \"\"" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.660725 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" event={"ID":"8ea0b283-a909-4071-b414-acf02181dc0f","Type":"ContainerDied","Data":"a1baddf7d2fbf2237a16a754b3adeac793fb17e765049a11c06ce957bad39e7c"} Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.660787 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1baddf7d2fbf2237a16a754b3adeac793fb17e765049a11c06ce957bad39e7c" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.660800 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.752545 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6dkfn"] Nov 28 17:31:25 crc kubenswrapper[4710]: E1128 17:31:25.753091 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ea0b283-a909-4071-b414-acf02181dc0f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.753114 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ea0b283-a909-4071-b414-acf02181dc0f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.753420 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ea0b283-a909-4071-b414-acf02181dc0f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.754382 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.759802 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.760058 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.760208 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.760892 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.775242 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6dkfn"] Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.836812 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj7pd\" (UniqueName: \"kubernetes.io/projected/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-kube-api-access-rj7pd\") pod \"ssh-known-hosts-edpm-deployment-6dkfn\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.836875 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6dkfn\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.836951 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6dkfn\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.938595 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6dkfn\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.938792 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj7pd\" (UniqueName: \"kubernetes.io/projected/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-kube-api-access-rj7pd\") pod \"ssh-known-hosts-edpm-deployment-6dkfn\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.938850 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6dkfn\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:25 crc 
kubenswrapper[4710]: I1128 17:31:25.944611 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6dkfn\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.944626 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6dkfn\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:25 crc kubenswrapper[4710]: I1128 17:31:25.957288 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj7pd\" (UniqueName: \"kubernetes.io/projected/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-kube-api-access-rj7pd\") pod \"ssh-known-hosts-edpm-deployment-6dkfn\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:26 crc kubenswrapper[4710]: I1128 17:31:26.075717 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:26 crc kubenswrapper[4710]: I1128 17:31:26.681687 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6dkfn"] Nov 28 17:31:27 crc kubenswrapper[4710]: I1128 17:31:27.680842 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" event={"ID":"fba211eb-e531-4ebb-941c-5bd4c61b9a3b","Type":"ContainerStarted","Data":"f95ebdb9fa546276a5af8fe8f8a38eb39cced953df615976e676d3fbf7b65725"} Nov 28 17:31:28 crc kubenswrapper[4710]: I1128 17:31:28.697850 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" event={"ID":"fba211eb-e531-4ebb-941c-5bd4c61b9a3b","Type":"ContainerStarted","Data":"cb199bbf4b46a29a5f002935b6418b82e70d9da51c32b531556c93d5d9d5d404"} Nov 28 17:31:28 crc kubenswrapper[4710]: I1128 17:31:28.728305 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" podStartSLOduration=2.843706212 podStartE2EDuration="3.728285004s" podCreationTimestamp="2025-11-28 17:31:25 +0000 UTC" firstStartedPulling="2025-11-28 17:31:26.685151164 +0000 UTC m=+1975.943451209" lastFinishedPulling="2025-11-28 17:31:27.569729956 +0000 UTC m=+1976.828030001" observedRunningTime="2025-11-28 17:31:28.7141834 +0000 UTC m=+1977.972483445" watchObservedRunningTime="2025-11-28 17:31:28.728285004 +0000 UTC m=+1977.986585049" Nov 28 17:31:35 crc kubenswrapper[4710]: I1128 17:31:35.776018 4710 generic.go:334] "Generic (PLEG): container finished" podID="fba211eb-e531-4ebb-941c-5bd4c61b9a3b" containerID="cb199bbf4b46a29a5f002935b6418b82e70d9da51c32b531556c93d5d9d5d404" exitCode=0 Nov 28 17:31:35 crc kubenswrapper[4710]: I1128 17:31:35.776126 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" event={"ID":"fba211eb-e531-4ebb-941c-5bd4c61b9a3b","Type":"ContainerDied","Data":"cb199bbf4b46a29a5f002935b6418b82e70d9da51c32b531556c93d5d9d5d404"} Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.258400 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.402443 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-ssh-key-openstack-edpm-ipam\") pod \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.402794 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-inventory-0\") pod \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.402859 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj7pd\" (UniqueName: \"kubernetes.io/projected/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-kube-api-access-rj7pd\") pod \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\" (UID: \"fba211eb-e531-4ebb-941c-5bd4c61b9a3b\") " Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.408902 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-kube-api-access-rj7pd" (OuterVolumeSpecName: "kube-api-access-rj7pd") pod "fba211eb-e531-4ebb-941c-5bd4c61b9a3b" (UID: "fba211eb-e531-4ebb-941c-5bd4c61b9a3b"). InnerVolumeSpecName "kube-api-access-rj7pd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.439447 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fba211eb-e531-4ebb-941c-5bd4c61b9a3b" (UID: "fba211eb-e531-4ebb-941c-5bd4c61b9a3b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.439915 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "fba211eb-e531-4ebb-941c-5bd4c61b9a3b" (UID: "fba211eb-e531-4ebb-941c-5bd4c61b9a3b"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.511713 4710 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.511777 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj7pd\" (UniqueName: \"kubernetes.io/projected/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-kube-api-access-rj7pd\") on node \"crc\" DevicePath \"\"" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.511796 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fba211eb-e531-4ebb-941c-5bd4c61b9a3b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.801372 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" event={"ID":"fba211eb-e531-4ebb-941c-5bd4c61b9a3b","Type":"ContainerDied","Data":"f95ebdb9fa546276a5af8fe8f8a38eb39cced953df615976e676d3fbf7b65725"} Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.801884 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f95ebdb9fa546276a5af8fe8f8a38eb39cced953df615976e676d3fbf7b65725" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.801713 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6dkfn" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.880303 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h"] Nov 28 17:31:37 crc kubenswrapper[4710]: E1128 17:31:37.880838 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba211eb-e531-4ebb-941c-5bd4c61b9a3b" containerName="ssh-known-hosts-edpm-deployment" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.880863 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba211eb-e531-4ebb-941c-5bd4c61b9a3b" containerName="ssh-known-hosts-edpm-deployment" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.881124 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba211eb-e531-4ebb-941c-5bd4c61b9a3b" containerName="ssh-known-hosts-edpm-deployment" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.882117 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.884321 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.884325 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.885009 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.886406 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:31:37 crc kubenswrapper[4710]: I1128 17:31:37.893322 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h"] Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.022891 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhc27\" (UniqueName: \"kubernetes.io/projected/05c25761-79e7-4b39-985a-16705cbb29ae-kube-api-access-bhc27\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xkv6h\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.022988 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xkv6h\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.023044 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xkv6h\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.125173 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhc27\" (UniqueName: \"kubernetes.io/projected/05c25761-79e7-4b39-985a-16705cbb29ae-kube-api-access-bhc27\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xkv6h\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.125265 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xkv6h\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.125310 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xkv6h\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.130704 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xkv6h\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.132200 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xkv6h\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.144546 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhc27\" (UniqueName: \"kubernetes.io/projected/05c25761-79e7-4b39-985a-16705cbb29ae-kube-api-access-bhc27\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-xkv6h\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.200104 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.754230 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h"] Nov 28 17:31:38 crc kubenswrapper[4710]: I1128 17:31:38.818783 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" event={"ID":"05c25761-79e7-4b39-985a-16705cbb29ae","Type":"ContainerStarted","Data":"26dc64753176dd254ad643ee113b058b3a1e183621aa05909890fcbc29d6a139"} Nov 28 17:31:39 crc kubenswrapper[4710]: I1128 17:31:39.837876 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" event={"ID":"05c25761-79e7-4b39-985a-16705cbb29ae","Type":"ContainerStarted","Data":"b9f386322de4138f0c340983c3c26559c9da4f07e1e53518df9c38d7bb5a022b"} Nov 28 17:31:39 crc kubenswrapper[4710]: I1128 17:31:39.873576 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" podStartSLOduration=2.370701281 podStartE2EDuration="2.873546893s" podCreationTimestamp="2025-11-28 17:31:37 +0000 UTC" firstStartedPulling="2025-11-28 17:31:38.760343383 +0000 UTC m=+1988.018643428" lastFinishedPulling="2025-11-28 17:31:39.263188995 +0000 UTC m=+1988.521489040" observedRunningTime="2025-11-28 17:31:39.8603759 +0000 UTC m=+1989.118675955" watchObservedRunningTime="2025-11-28 17:31:39.873546893 +0000 UTC m=+1989.131846968" Nov 28 17:31:42 crc kubenswrapper[4710]: I1128 17:31:42.335373 4710 scope.go:117] "RemoveContainer" containerID="bb2bcc47fbe2944b419896f2e544cbc104917f8407f96d430196b05c6f5e98b1" Nov 28 17:31:42 crc kubenswrapper[4710]: I1128 17:31:42.398449 4710 scope.go:117] "RemoveContainer" containerID="5cffd73c49b46daed424089209381ad85a40f3c3dd8bcb19bc5f2d9133477086" Nov 28 17:31:42 crc kubenswrapper[4710]: I1128 17:31:42.440998 4710 scope.go:117] "RemoveContainer" containerID="9f783ef69d5d54358a6df9c254d295ef46b1c24e4323166f8ccffa1c35419227" Nov 
Nov 28 17:31:42 crc kubenswrapper[4710]: I1128 17:31:42.511334 4710 scope.go:117] "RemoveContainer" containerID="faa5fc465964f4b1fc77376cec30beb90d93458adb12ba94b524522dfcfd97d1"
Nov 28 17:31:42 crc kubenswrapper[4710]: I1128 17:31:42.568628 4710 scope.go:117] "RemoveContainer" containerID="a0147473297e95a351cc7287c8ae9ce74d6452ba010ec45a6ee96cfadf2b0987"
Nov 28 17:31:42 crc kubenswrapper[4710]: I1128 17:31:42.606575 4710 scope.go:117] "RemoveContainer" containerID="9237cdb97ae0db7e465dedd228d5d54f1a2a7fda7a9ae144f9fcf61c61b765ea"
Nov 28 17:31:46 crc kubenswrapper[4710]: I1128 17:31:46.075474 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqppl"]
Nov 28 17:31:46 crc kubenswrapper[4710]: I1128 17:31:46.091232 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-wqppl"]
Nov 28 17:31:47 crc kubenswrapper[4710]: I1128 17:31:47.161480 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41c8bff-334a-4b57-bff0-c5716b30514c" path="/var/lib/kubelet/pods/e41c8bff-334a-4b57-bff0-c5716b30514c/volumes"
Nov 28 17:31:48 crc kubenswrapper[4710]: I1128 17:31:48.942303 4710 generic.go:334] "Generic (PLEG): container finished" podID="05c25761-79e7-4b39-985a-16705cbb29ae" containerID="b9f386322de4138f0c340983c3c26559c9da4f07e1e53518df9c38d7bb5a022b" exitCode=0
Nov 28 17:31:48 crc kubenswrapper[4710]: I1128 17:31:48.942390 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" event={"ID":"05c25761-79e7-4b39-985a-16705cbb29ae","Type":"ContainerDied","Data":"b9f386322de4138f0c340983c3c26559c9da4f07e1e53518df9c38d7bb5a022b"}
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.406047 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h"
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.512412 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-inventory\") pod \"05c25761-79e7-4b39-985a-16705cbb29ae\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") "
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.512527 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-ssh-key\") pod \"05c25761-79e7-4b39-985a-16705cbb29ae\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") "
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.512698 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhc27\" (UniqueName: \"kubernetes.io/projected/05c25761-79e7-4b39-985a-16705cbb29ae-kube-api-access-bhc27\") pod \"05c25761-79e7-4b39-985a-16705cbb29ae\" (UID: \"05c25761-79e7-4b39-985a-16705cbb29ae\") "
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.543554 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c25761-79e7-4b39-985a-16705cbb29ae-kube-api-access-bhc27" (OuterVolumeSpecName: "kube-api-access-bhc27") pod "05c25761-79e7-4b39-985a-16705cbb29ae" (UID: "05c25761-79e7-4b39-985a-16705cbb29ae"). InnerVolumeSpecName "kube-api-access-bhc27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.564417 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "05c25761-79e7-4b39-985a-16705cbb29ae" (UID: "05c25761-79e7-4b39-985a-16705cbb29ae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.568085 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-inventory" (OuterVolumeSpecName: "inventory") pod "05c25761-79e7-4b39-985a-16705cbb29ae" (UID: "05c25761-79e7-4b39-985a-16705cbb29ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.634952 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-inventory\") on node \"crc\" DevicePath \"\""
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.635027 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05c25761-79e7-4b39-985a-16705cbb29ae-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.635046 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhc27\" (UniqueName: \"kubernetes.io/projected/05c25761-79e7-4b39-985a-16705cbb29ae-kube-api-access-bhc27\") on node \"crc\" DevicePath \"\""
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.966979 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h" event={"ID":"05c25761-79e7-4b39-985a-16705cbb29ae","Type":"ContainerDied","Data":"26dc64753176dd254ad643ee113b058b3a1e183621aa05909890fcbc29d6a139"}
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.967301 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26dc64753176dd254ad643ee113b058b3a1e183621aa05909890fcbc29d6a139"
Nov 28 17:31:50 crc kubenswrapper[4710]: I1128 17:31:50.967143 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-xkv6h"
Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.048080 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n"]
Nov 28 17:31:51 crc kubenswrapper[4710]: E1128 17:31:51.048602 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c25761-79e7-4b39-985a-16705cbb29ae" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.048628 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c25761-79e7-4b39-985a-16705cbb29ae" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.048954 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c25761-79e7-4b39-985a-16705cbb29ae" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.049874 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n"
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.055243 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.055670 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.055708 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.056125 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.067051 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n"] Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.143652 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.143789 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t528j\" (UniqueName: \"kubernetes.io/projected/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-kube-api-access-t528j\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.143873 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.246635 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.246918 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.247054 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t528j\" (UniqueName: \"kubernetes.io/projected/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-kube-api-access-t528j\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n\" (UID: 
\"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.253592 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.253745 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.263314 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t528j\" (UniqueName: \"kubernetes.io/projected/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-kube-api-access-t528j\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.380168 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.919003 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n"] Nov 28 17:31:51 crc kubenswrapper[4710]: W1128 17:31:51.922411 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a1938e5_0e94_4679_a7d1_d9d9b45681c5.slice/crio-f0f758c9a9120a44488ced5f62ad8ee73690ab7736d8ff3e5ce56302a91aedef WatchSource:0}: Error finding container f0f758c9a9120a44488ced5f62ad8ee73690ab7736d8ff3e5ce56302a91aedef: Status 404 returned error can't find the container with id f0f758c9a9120a44488ced5f62ad8ee73690ab7736d8ff3e5ce56302a91aedef Nov 28 17:31:51 crc kubenswrapper[4710]: I1128 17:31:51.984472 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" event={"ID":"2a1938e5-0e94-4679-a7d1-d9d9b45681c5","Type":"ContainerStarted","Data":"f0f758c9a9120a44488ced5f62ad8ee73690ab7736d8ff3e5ce56302a91aedef"} Nov 28 17:31:52 crc kubenswrapper[4710]: I1128 17:31:52.999184 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" event={"ID":"2a1938e5-0e94-4679-a7d1-d9d9b45681c5","Type":"ContainerStarted","Data":"cf5cd419834b5bf0d5c1444ee8cdc987b88c648b491237e2829ffba2ea9c32be"} Nov 28 17:31:53 crc kubenswrapper[4710]: I1128 17:31:53.037929 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" podStartSLOduration=1.431521129 podStartE2EDuration="2.03790486s" podCreationTimestamp="2025-11-28 17:31:51 +0000 UTC" firstStartedPulling="2025-11-28 17:31:51.924389219 +0000 UTC m=+2001.182689264" lastFinishedPulling="2025-11-28 17:31:52.53077295 +0000 UTC m=+2001.789072995" observedRunningTime="2025-11-28 17:31:53.018355192 +0000 UTC m=+2002.276655297" 
watchObservedRunningTime="2025-11-28 17:31:53.03790486 +0000 UTC m=+2002.296204915" Nov 28 17:32:03 crc kubenswrapper[4710]: I1128 17:32:03.122563 4710 generic.go:334] "Generic (PLEG): container finished" podID="2a1938e5-0e94-4679-a7d1-d9d9b45681c5" containerID="cf5cd419834b5bf0d5c1444ee8cdc987b88c648b491237e2829ffba2ea9c32be" exitCode=0 Nov 28 17:32:03 crc kubenswrapper[4710]: I1128 17:32:03.122718 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" event={"ID":"2a1938e5-0e94-4679-a7d1-d9d9b45681c5","Type":"ContainerDied","Data":"cf5cd419834b5bf0d5c1444ee8cdc987b88c648b491237e2829ffba2ea9c32be"} Nov 28 17:32:04 crc kubenswrapper[4710]: I1128 17:32:04.742173 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:32:04 crc kubenswrapper[4710]: I1128 17:32:04.923640 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-ssh-key\") pod \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " Nov 28 17:32:04 crc kubenswrapper[4710]: I1128 17:32:04.923965 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-inventory\") pod \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " Nov 28 17:32:04 crc kubenswrapper[4710]: I1128 17:32:04.924168 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t528j\" (UniqueName: \"kubernetes.io/projected/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-kube-api-access-t528j\") pod \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\" (UID: \"2a1938e5-0e94-4679-a7d1-d9d9b45681c5\") " Nov 28 17:32:04 crc kubenswrapper[4710]: I1128 17:32:04.929589 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-kube-api-access-t528j" (OuterVolumeSpecName: "kube-api-access-t528j") pod "2a1938e5-0e94-4679-a7d1-d9d9b45681c5" (UID: "2a1938e5-0e94-4679-a7d1-d9d9b45681c5"). InnerVolumeSpecName "kube-api-access-t528j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:32:04 crc kubenswrapper[4710]: I1128 17:32:04.960374 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2a1938e5-0e94-4679-a7d1-d9d9b45681c5" (UID: "2a1938e5-0e94-4679-a7d1-d9d9b45681c5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:32:04 crc kubenswrapper[4710]: I1128 17:32:04.960935 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-inventory" (OuterVolumeSpecName: "inventory") pod "2a1938e5-0e94-4679-a7d1-d9d9b45681c5" (UID: "2a1938e5-0e94-4679-a7d1-d9d9b45681c5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.027172 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.027241 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.027270 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t528j\" (UniqueName: \"kubernetes.io/projected/2a1938e5-0e94-4679-a7d1-d9d9b45681c5-kube-api-access-t528j\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.151685 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.165267 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n" event={"ID":"2a1938e5-0e94-4679-a7d1-d9d9b45681c5","Type":"ContainerDied","Data":"f0f758c9a9120a44488ced5f62ad8ee73690ab7736d8ff3e5ce56302a91aedef"} Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.165322 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0f758c9a9120a44488ced5f62ad8ee73690ab7736d8ff3e5ce56302a91aedef" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.272345 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"] Nov 28 17:32:05 crc kubenswrapper[4710]: E1128 17:32:05.273319 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1938e5-0e94-4679-a7d1-d9d9b45681c5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.273498 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1938e5-0e94-4679-a7d1-d9d9b45681c5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.273965 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1938e5-0e94-4679-a7d1-d9d9b45681c5" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.276040 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.278529 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.279045 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.279393 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.279948 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.280057 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.280358 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.280545 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.280632 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.293563 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"] Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.436980 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437064 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437096 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437128 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29rjv\" (UniqueName: 
\"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-kube-api-access-29rjv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437165 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437203 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437229 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437252 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437280 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437298 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437320 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: 
\"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437362 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437390 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.437418 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539505 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539568 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539597 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539630 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29rjv\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-kube-api-access-29rjv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539696 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539722 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539741 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539782 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539799 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539823 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539861 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539888 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.539913 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.547233 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.547665 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.547911 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.549155 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.549623 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.549927 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.549962 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.550645 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.551965 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.552338 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.552781 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.553027 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.556932 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.578566 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29rjv\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-kube-api-access-29rjv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:05 crc kubenswrapper[4710]: I1128 17:32:05.608796 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:06 crc kubenswrapper[4710]: W1128 17:32:06.172158 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d978938_7c7b_4b24_92a4_dda564a4d288.slice/crio-365ecc6607ed3c7356cbb81c6fb109ab64ed971b0e45e0639a9922d7951a210f WatchSource:0}: Error finding container 365ecc6607ed3c7356cbb81c6fb109ab64ed971b0e45e0639a9922d7951a210f: Status 404 returned error can't find the container with id 365ecc6607ed3c7356cbb81c6fb109ab64ed971b0e45e0639a9922d7951a210f
Nov 28 17:32:06 crc kubenswrapper[4710]: I1128 17:32:06.181465 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"]
Nov 28 17:32:07 crc kubenswrapper[4710]: I1128 17:32:07.175810 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" event={"ID":"9d978938-7c7b-4b24-92a4-dda564a4d288","Type":"ContainerStarted","Data":"775bcd863bfd6741c1ca2e128a24d53f42820faad556f13138804f16f4d50e67"}
Nov 28 17:32:07 crc kubenswrapper[4710]: I1128 17:32:07.176430 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" event={"ID":"9d978938-7c7b-4b24-92a4-dda564a4d288","Type":"ContainerStarted","Data":"365ecc6607ed3c7356cbb81c6fb109ab64ed971b0e45e0639a9922d7951a210f"}
Nov 28 17:32:07 crc kubenswrapper[4710]: I1128 17:32:07.203243 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" podStartSLOduration=1.7356866210000002 podStartE2EDuration="2.203222448s" podCreationTimestamp="2025-11-28 17:32:05 +0000 UTC" firstStartedPulling="2025-11-28 17:32:06.174476413 +0000 UTC m=+2015.432776458" lastFinishedPulling="2025-11-28 17:32:06.64201223 +0000 UTC m=+2015.900312285" observedRunningTime="2025-11-28 17:32:07.195272373 +0000 UTC m=+2016.453572428" watchObservedRunningTime="2025-11-28 17:32:07.203222448 +0000 UTC m=+2016.461522503"
Nov 28 17:32:13 crc kubenswrapper[4710]: I1128 17:32:13.345738 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:32:13 crc kubenswrapper[4710]: I1128 17:32:13.346919 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:32:42 crc kubenswrapper[4710]: I1128 17:32:42.758428 4710 scope.go:117] "RemoveContainer" containerID="0a3e7c38c956376398bb8781b3c9d480ed0bf396fce83fa6933948f55401c179"
Nov 28 17:32:43 crc kubenswrapper[4710]: I1128 17:32:43.344061 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:32:43 crc kubenswrapper[4710]: I1128 17:32:43.344505 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:32:49 crc kubenswrapper[4710]: I1128 17:32:49.688945 4710 generic.go:334] "Generic (PLEG): container finished" podID="9d978938-7c7b-4b24-92a4-dda564a4d288" containerID="775bcd863bfd6741c1ca2e128a24d53f42820faad556f13138804f16f4d50e67" exitCode=0
Nov 28 17:32:49 crc kubenswrapper[4710]: I1128 17:32:49.689035 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" event={"ID":"9d978938-7c7b-4b24-92a4-dda564a4d288","Type":"ContainerDied","Data":"775bcd863bfd6741c1ca2e128a24d53f42820faad556f13138804f16f4d50e67"}
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.212620 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn"
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.329723 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.329795 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-libvirt-combined-ca-bundle\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.329859 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-inventory\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.329899 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-telemetry-combined-ca-bundle\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.329927 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ssh-key\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.329975 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-bootstrap-combined-ca-bundle\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.329992 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-repo-setup-combined-ca-bundle\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.330038 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.330076 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-neutron-metadata-combined-ca-bundle\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.330101 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29rjv\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-kube-api-access-29rjv\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.330144 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.330166 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-nova-combined-ca-bundle\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.330202 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ovn-combined-ca-bundle\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.330224 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-ovn-default-certs-0\") pod \"9d978938-7c7b-4b24-92a4-dda564a4d288\" (UID: \"9d978938-7c7b-4b24-92a4-dda564a4d288\") "
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.337381 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.338161 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.338706 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.340494 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.342221 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.344571 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.348073 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.349364 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.353164 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.353838 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-kube-api-access-29rjv" (OuterVolumeSpecName: "kube-api-access-29rjv") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "kube-api-access-29rjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.356780 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.358689 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.374608 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.384369 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-inventory" (OuterVolumeSpecName: "inventory") pod "9d978938-7c7b-4b24-92a4-dda564a4d288" (UID: "9d978938-7c7b-4b24-92a4-dda564a4d288"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434669 4710 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434713 4710 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434728 4710 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434739 4710 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434768 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434778 4710 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434787 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434800 4710 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434811 4710 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434820 4710 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434832 4710 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434842 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29rjv\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-kube-api-access-29rjv\") 
on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434852 4710 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/9d978938-7c7b-4b24-92a4-dda564a4d288-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.434861 4710 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d978938-7c7b-4b24-92a4-dda564a4d288-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.713467 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" event={"ID":"9d978938-7c7b-4b24-92a4-dda564a4d288","Type":"ContainerDied","Data":"365ecc6607ed3c7356cbb81c6fb109ab64ed971b0e45e0639a9922d7951a210f"} Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.714098 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="365ecc6607ed3c7356cbb81c6fb109ab64ed971b0e45e0639a9922d7951a210f" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.713715 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.814890 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs"] Nov 28 17:32:51 crc kubenswrapper[4710]: E1128 17:32:51.815459 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d978938-7c7b-4b24-92a4-dda564a4d288" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.815485 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d978938-7c7b-4b24-92a4-dda564a4d288" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.815750 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d978938-7c7b-4b24-92a4-dda564a4d288" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.816734 4710 util.go:30] "No sandbox for pod can be found. 
Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.814890 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs"] Nov 28 17:32:51 crc kubenswrapper[4710]: E1128 17:32:51.815459 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d978938-7c7b-4b24-92a4-dda564a4d288" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.815485 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d978938-7c7b-4b24-92a4-dda564a4d288" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.815750 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d978938-7c7b-4b24-92a4-dda564a4d288" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.816734 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.820833 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.820974 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.821007 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.821265 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.821437 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.831917 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs"] Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.945892 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.945977 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.946040 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.946073 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:51 crc kubenswrapper[4710]: I1128 17:32:51.946173 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp86k\" (UniqueName: \"kubernetes.io/projected/17b2ab0e-183f-433e-a79f-09d25daa2cd5-kube-api-access-fp86k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.048426 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.048567 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.048668 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.048732 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.048882 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp86k\" (UniqueName: \"kubernetes.io/projected/17b2ab0e-183f-433e-a79f-09d25daa2cd5-kube-api-access-fp86k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.049575 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.052476 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.052917 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.053213 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.064990 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp86k\" (UniqueName: \"kubernetes.io/projected/17b2ab0e-183f-433e-a79f-09d25daa2cd5-kube-api-access-fp86k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-np6vs\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.137291 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:32:52 crc kubenswrapper[4710]: I1128 17:32:52.734633 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs"] Nov 28 17:32:53 crc kubenswrapper[4710]: I1128 17:32:53.734785 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" event={"ID":"17b2ab0e-183f-433e-a79f-09d25daa2cd5","Type":"ContainerStarted","Data":"e431e6cec0935f34ad7eab3e3349b5003e0ea43e2bc99388052735e51f984cb2"} Nov 28 17:32:54 crc kubenswrapper[4710]: I1128 17:32:54.746070 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" event={"ID":"17b2ab0e-183f-433e-a79f-09d25daa2cd5","Type":"ContainerStarted","Data":"73a3e559228847719b7a27bf66609fad80051977345c02b5eb45cdfede525cdd"} Nov 28 17:32:54 crc kubenswrapper[4710]: I1128 17:32:54.778278 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" podStartSLOduration=3.034287845 podStartE2EDuration="3.778244338s" podCreationTimestamp="2025-11-28 17:32:51 +0000 UTC" firstStartedPulling="2025-11-28 17:32:52.751129283 +0000 UTC m=+2062.009429328" lastFinishedPulling="2025-11-28 17:32:53.495085776 +0000 UTC m=+2062.753385821" observedRunningTime="2025-11-28 17:32:54.762507843 +0000 UTC m=+2064.020807938" watchObservedRunningTime="2025-11-28 17:32:54.778244338 +0000 UTC m=+2064.036544413" Nov 28 17:33:13 crc kubenswrapper[4710]: I1128 17:33:13.344335 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:33:13 crc kubenswrapper[4710]: I1128 17:33:13.346033 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:33:13 crc kubenswrapper[4710]: I1128 17:33:13.346145 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:33:13 crc kubenswrapper[4710]: I1128 17:33:13.347086 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"525474b05cc0e8cfad42d7334f3128ea31ed4f5fe6977e6899ad8e185ddc6855"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Nov 28 17:33:13 crc kubenswrapper[4710]: I1128 17:33:13.347247 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://525474b05cc0e8cfad42d7334f3128ea31ed4f5fe6977e6899ad8e185ddc6855" gracePeriod=600 Nov 28 17:33:13 crc kubenswrapper[4710]: I1128 17:33:13.932187 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="525474b05cc0e8cfad42d7334f3128ea31ed4f5fe6977e6899ad8e185ddc6855" exitCode=0 Nov 28 17:33:13 crc kubenswrapper[4710]: I1128 17:33:13.932565 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"525474b05cc0e8cfad42d7334f3128ea31ed4f5fe6977e6899ad8e185ddc6855"} Nov 28 17:33:13 crc kubenswrapper[4710]: I1128 17:33:13.932684 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080"} Nov 28 17:33:13 crc kubenswrapper[4710]: I1128 17:33:13.932707 4710 scope.go:117] "RemoveContainer" containerID="d4a775f2b5c0f55a7692a6ed8443030008ba18cc4b6ff3790bb6f6f8ecc77d33" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.424910 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tmxlj"] Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.428279 4710 util.go:30] "No sandbox for pod can be found. 
Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.424910 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tmxlj"] Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.428279 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.471019 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmxlj"] Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.583568 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-utilities\") pod \"redhat-operators-tmxlj\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.584045 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h44ck\" (UniqueName: \"kubernetes.io/projected/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-kube-api-access-h44ck\") pod \"redhat-operators-tmxlj\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.584144 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-catalog-content\") pod \"redhat-operators-tmxlj\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.686496 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-utilities\") pod \"redhat-operators-tmxlj\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.686617 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h44ck\" (UniqueName: \"kubernetes.io/projected/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-kube-api-access-h44ck\") pod \"redhat-operators-tmxlj\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.686679 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-catalog-content\") pod \"redhat-operators-tmxlj\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.687352 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-catalog-content\") pod \"redhat-operators-tmxlj\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.687941 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-utilities\") pod \"redhat-operators-tmxlj\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.719032 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h44ck\" (UniqueName: \"kubernetes.io/projected/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-kube-api-access-h44ck\") pod \"redhat-operators-tmxlj\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:16 crc kubenswrapper[4710]: I1128 17:33:16.771820 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:17 crc kubenswrapper[4710]: I1128 17:33:17.301507 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tmxlj"] Nov 28 17:33:17 crc kubenswrapper[4710]: I1128 17:33:17.991603 4710 generic.go:334] "Generic (PLEG): container finished" podID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerID="193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865" exitCode=0 Nov 28 17:33:17 crc kubenswrapper[4710]: I1128 17:33:17.991686 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmxlj" event={"ID":"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5","Type":"ContainerDied","Data":"193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865"} Nov 28 17:33:17 crc kubenswrapper[4710]: I1128 17:33:17.992332 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmxlj" event={"ID":"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5","Type":"ContainerStarted","Data":"da4441da6ee7d5e3d3d83ec38b2fa819a1f76d3144c52dbf059bd527f12e4e57"} Nov 28 17:33:19 crc kubenswrapper[4710]: I1128 17:33:19.004203 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmxlj" event={"ID":"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5","Type":"ContainerStarted","Data":"012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c"} Nov 28 17:33:22 crc kubenswrapper[4710]: I1128 17:33:22.040199 4710 generic.go:334] "Generic (PLEG): container finished" podID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerID="012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c" exitCode=0 Nov 28 17:33:22 crc kubenswrapper[4710]: I1128 17:33:22.040257 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmxlj" event={"ID":"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5","Type":"ContainerDied","Data":"012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c"} Nov 28 17:33:23 crc kubenswrapper[4710]: I1128 17:33:23.051776 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmxlj" event={"ID":"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5","Type":"ContainerStarted","Data":"94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03"} Nov 28 17:33:23 crc kubenswrapper[4710]: I1128 17:33:23.075096 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tmxlj" podStartSLOduration=2.322631579 podStartE2EDuration="7.075079602s" podCreationTimestamp="2025-11-28 17:33:16 +0000 UTC" firstStartedPulling="2025-11-28 17:33:17.995593197 +0000 UTC m=+2087.253893242" lastFinishedPulling="2025-11-28 17:33:22.74804122 +0000 UTC m=+2092.006341265" observedRunningTime="2025-11-28 17:33:23.067857959 +0000 UTC m=+2092.326158004" watchObservedRunningTime="2025-11-28 17:33:23.075079602 +0000 UTC m=+2092.333379647" Nov 28 17:33:26 crc kubenswrapper[4710]: I1128 17:33:26.772936 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 
17:33:26 crc kubenswrapper[4710]: I1128 17:33:26.773522 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:27 crc kubenswrapper[4710]: I1128 17:33:27.832510 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tmxlj" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerName="registry-server" probeResult="failure" output=< Nov 28 17:33:27 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s Nov 28 17:33:27 crc kubenswrapper[4710]: > Nov 28 17:33:36 crc kubenswrapper[4710]: I1128 17:33:36.839388 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:36 crc kubenswrapper[4710]: I1128 17:33:36.905275 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:37 crc kubenswrapper[4710]: I1128 17:33:37.074836 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmxlj"] Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.217553 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tmxlj" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerName="registry-server" containerID="cri-o://94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03" gracePeriod=2 Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.751951 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.802752 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-utilities\") pod \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.802877 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h44ck\" (UniqueName: \"kubernetes.io/projected/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-kube-api-access-h44ck\") pod \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.802929 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-catalog-content\") pod \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\" (UID: \"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5\") " Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.803417 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-utilities" (OuterVolumeSpecName: "utilities") pod "f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" (UID: "f29b1dca-32b4-4694-81cb-2bb5d2e6dce5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.808175 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-kube-api-access-h44ck" (OuterVolumeSpecName: "kube-api-access-h44ck") pod "f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" (UID: "f29b1dca-32b4-4694-81cb-2bb5d2e6dce5"). InnerVolumeSpecName "kube-api-access-h44ck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.900931 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" (UID: "f29b1dca-32b4-4694-81cb-2bb5d2e6dce5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.905905 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.905949 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h44ck\" (UniqueName: \"kubernetes.io/projected/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-kube-api-access-h44ck\") on node \"crc\" DevicePath \"\"" Nov 28 17:33:38 crc kubenswrapper[4710]: I1128 17:33:38.905966 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.233361 4710 generic.go:334] "Generic (PLEG): container finished" podID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerID="94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03" exitCode=0 Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.233429 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmxlj" event={"ID":"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5","Type":"ContainerDied","Data":"94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03"} Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.233450 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tmxlj" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.233483 4710 scope.go:117] "RemoveContainer" containerID="94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.233469 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tmxlj" event={"ID":"f29b1dca-32b4-4694-81cb-2bb5d2e6dce5","Type":"ContainerDied","Data":"da4441da6ee7d5e3d3d83ec38b2fa819a1f76d3144c52dbf059bd527f12e4e57"} Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.271871 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tmxlj"] Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.282478 4710 scope.go:117] "RemoveContainer" containerID="012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.295013 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tmxlj"] Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.312824 4710 scope.go:117] "RemoveContainer" containerID="193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.373671 4710 scope.go:117] "RemoveContainer" containerID="94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03" Nov 28 17:33:39 crc kubenswrapper[4710]: E1128 17:33:39.374501 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03\": container with ID starting with 94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03 not found: ID does not exist" containerID="94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.374558 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03"} err="failed to get container status \"94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03\": rpc error: code = NotFound desc = could not find container \"94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03\": container with ID starting with 94433c04f4f8ee45ee07daff9171069bf3da26da4b2d149de47aabcf424abc03 not found: ID does not exist" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.374587 4710 scope.go:117] "RemoveContainer" containerID="012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c" Nov 28 17:33:39 crc kubenswrapper[4710]: E1128 17:33:39.375215 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c\": container with ID starting with 012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c not found: ID does not exist" containerID="012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.375449 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c"} err="failed to get container status \"012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c\": rpc error: code = NotFound desc = could not find container 
\"012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c\": container with ID starting with 012c69c932b8b07b30a3d908f2e2d35e515fb132c9a768164dc6bdc766179d0c not found: ID does not exist" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.375972 4710 scope.go:117] "RemoveContainer" containerID="193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865" Nov 28 17:33:39 crc kubenswrapper[4710]: E1128 17:33:39.376646 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865\": container with ID starting with 193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865 not found: ID does not exist" containerID="193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865" Nov 28 17:33:39 crc kubenswrapper[4710]: I1128 17:33:39.376671 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865"} err="failed to get container status \"193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865\": rpc error: code = NotFound desc = could not find container \"193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865\": container with ID starting with 193fc98c92b473fef2c60719f28591dd37a0b74988946d736cf542f430335865 not found: ID does not exist" Nov 28 17:33:41 crc kubenswrapper[4710]: I1128 17:33:41.164029 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" path="/var/lib/kubelet/pods/f29b1dca-32b4-4694-81cb-2bb5d2e6dce5/volumes" Nov 28 17:34:02 crc kubenswrapper[4710]: I1128 17:34:02.481517 4710 generic.go:334] "Generic (PLEG): container finished" podID="17b2ab0e-183f-433e-a79f-09d25daa2cd5" containerID="73a3e559228847719b7a27bf66609fad80051977345c02b5eb45cdfede525cdd" exitCode=0 Nov 28 17:34:02 crc kubenswrapper[4710]: I1128 17:34:02.481604 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" event={"ID":"17b2ab0e-183f-433e-a79f-09d25daa2cd5","Type":"ContainerDied","Data":"73a3e559228847719b7a27bf66609fad80051977345c02b5eb45cdfede525cdd"} Nov 28 17:34:03 crc kubenswrapper[4710]: I1128 17:34:03.940436 4710 util.go:48] "No ready sandbox for pod can be found. 
Nov 28 17:34:02 crc kubenswrapper[4710]: I1128 17:34:02.481517 4710 generic.go:334] "Generic (PLEG): container finished" podID="17b2ab0e-183f-433e-a79f-09d25daa2cd5" containerID="73a3e559228847719b7a27bf66609fad80051977345c02b5eb45cdfede525cdd" exitCode=0 Nov 28 17:34:02 crc kubenswrapper[4710]: I1128 17:34:02.481604 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" event={"ID":"17b2ab0e-183f-433e-a79f-09d25daa2cd5","Type":"ContainerDied","Data":"73a3e559228847719b7a27bf66609fad80051977345c02b5eb45cdfede525cdd"} Nov 28 17:34:03 crc kubenswrapper[4710]: I1128 17:34:03.940436 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:34:03 crc kubenswrapper[4710]: I1128 17:34:03.989951 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovn-combined-ca-bundle\") pod \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " Nov 28 17:34:03 crc kubenswrapper[4710]: I1128 17:34:03.990133 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-inventory\") pod \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " Nov 28 17:34:03 crc kubenswrapper[4710]: I1128 17:34:03.990251 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ssh-key\") pod \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " Nov 28 17:34:03 crc kubenswrapper[4710]: I1128 17:34:03.990500 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp86k\" (UniqueName: \"kubernetes.io/projected/17b2ab0e-183f-433e-a79f-09d25daa2cd5-kube-api-access-fp86k\") pod \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " Nov 28 17:34:03 crc kubenswrapper[4710]: I1128 17:34:03.990561 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovncontroller-config-0\") pod \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\" (UID: \"17b2ab0e-183f-433e-a79f-09d25daa2cd5\") " Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.007279 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "17b2ab0e-183f-433e-a79f-09d25daa2cd5" (UID: "17b2ab0e-183f-433e-a79f-09d25daa2cd5"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.017102 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17b2ab0e-183f-433e-a79f-09d25daa2cd5-kube-api-access-fp86k" (OuterVolumeSpecName: "kube-api-access-fp86k") pod "17b2ab0e-183f-433e-a79f-09d25daa2cd5" (UID: "17b2ab0e-183f-433e-a79f-09d25daa2cd5"). InnerVolumeSpecName "kube-api-access-fp86k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.046559 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "17b2ab0e-183f-433e-a79f-09d25daa2cd5" (UID: "17b2ab0e-183f-433e-a79f-09d25daa2cd5"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.093636 4710 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.093665 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp86k\" (UniqueName: \"kubernetes.io/projected/17b2ab0e-183f-433e-a79f-09d25daa2cd5-kube-api-access-fp86k\") on node \"crc\" DevicePath \"\"" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.093676 4710 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.099318 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "17b2ab0e-183f-433e-a79f-09d25daa2cd5" (UID: "17b2ab0e-183f-433e-a79f-09d25daa2cd5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.122579 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-inventory" (OuterVolumeSpecName: "inventory") pod "17b2ab0e-183f-433e-a79f-09d25daa2cd5" (UID: "17b2ab0e-183f-433e-a79f-09d25daa2cd5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.196573 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.196624 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17b2ab0e-183f-433e-a79f-09d25daa2cd5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.505398 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" event={"ID":"17b2ab0e-183f-433e-a79f-09d25daa2cd5","Type":"ContainerDied","Data":"e431e6cec0935f34ad7eab3e3349b5003e0ea43e2bc99388052735e51f984cb2"} Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.505440 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e431e6cec0935f34ad7eab3e3349b5003e0ea43e2bc99388052735e51f984cb2" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.505494 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-np6vs" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.613908 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv"] Nov 28 17:34:04 crc kubenswrapper[4710]: E1128 17:34:04.614468 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerName="registry-server" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.614492 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerName="registry-server" Nov 28 17:34:04 crc kubenswrapper[4710]: E1128 17:34:04.614511 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17b2ab0e-183f-433e-a79f-09d25daa2cd5" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.614520 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="17b2ab0e-183f-433e-a79f-09d25daa2cd5" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 17:34:04 crc kubenswrapper[4710]: E1128 17:34:04.614561 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerName="extract-utilities" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.614569 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerName="extract-utilities" Nov 28 17:34:04 crc kubenswrapper[4710]: E1128 17:34:04.614589 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerName="extract-content" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.614596 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerName="extract-content" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.614861 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="f29b1dca-32b4-4694-81cb-2bb5d2e6dce5" containerName="registry-server" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.614890 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="17b2ab0e-183f-433e-a79f-09d25daa2cd5" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.615813 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.618015 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.618104 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.618365 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.618545 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.624141 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.624395 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.636439 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv"] Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.707845 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28lzq\" (UniqueName: \"kubernetes.io/projected/915c1bf8-3797-4c5a-a991-45be0aab70b9-kube-api-access-28lzq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.707927 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.708017 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.708092 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.708125 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.708265 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.809853 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28lzq\" (UniqueName: \"kubernetes.io/projected/915c1bf8-3797-4c5a-a991-45be0aab70b9-kube-api-access-28lzq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.809924 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.809993 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.810055 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.810084 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.810182 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.814777 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.814921 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.815030 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.816224 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.816411 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.830343 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28lzq\" (UniqueName: \"kubernetes.io/projected/915c1bf8-3797-4c5a-a991-45be0aab70b9-kube-api-access-28lzq\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:04 crc kubenswrapper[4710]: I1128 17:34:04.938741 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:34:05 crc kubenswrapper[4710]: I1128 17:34:05.592804 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv"] Nov 28 17:34:06 crc kubenswrapper[4710]: I1128 17:34:06.532556 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" event={"ID":"915c1bf8-3797-4c5a-a991-45be0aab70b9","Type":"ContainerStarted","Data":"1158cbc08a43e30642c6eb4201c80aa0553f1caedb295b8447bc3b93818e936d"} Nov 28 17:34:06 crc kubenswrapper[4710]: I1128 17:34:06.533190 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" event={"ID":"915c1bf8-3797-4c5a-a991-45be0aab70b9","Type":"ContainerStarted","Data":"e28222a522b6ab087de9a5cd64150320f66bf5fb5df1be2237bffb87dc99569a"} Nov 28 17:34:06 crc kubenswrapper[4710]: I1128 17:34:06.554005 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" podStartSLOduration=2.078682196 podStartE2EDuration="2.553986233s" podCreationTimestamp="2025-11-28 17:34:04 +0000 UTC" firstStartedPulling="2025-11-28 17:34:05.608024388 +0000 UTC m=+2134.866324433" lastFinishedPulling="2025-11-28 17:34:06.083328415 +0000 UTC m=+2135.341628470" observedRunningTime="2025-11-28 17:34:06.55016753 +0000 UTC m=+2135.808467595" watchObservedRunningTime="2025-11-28 17:34:06.553986233 +0000 UTC m=+2135.812286278" Nov 28 17:34:59 crc kubenswrapper[4710]: I1128 17:34:59.092640 4710 generic.go:334] "Generic (PLEG): container finished" podID="915c1bf8-3797-4c5a-a991-45be0aab70b9" containerID="1158cbc08a43e30642c6eb4201c80aa0553f1caedb295b8447bc3b93818e936d" exitCode=0 Nov 28 17:34:59 crc kubenswrapper[4710]: I1128 17:34:59.092784 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" event={"ID":"915c1bf8-3797-4c5a-a991-45be0aab70b9","Type":"ContainerDied","Data":"1158cbc08a43e30642c6eb4201c80aa0553f1caedb295b8447bc3b93818e936d"} Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.577620 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.674019 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-metadata-combined-ca-bundle\") pod \"915c1bf8-3797-4c5a-a991-45be0aab70b9\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.674071 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-ssh-key\") pod \"915c1bf8-3797-4c5a-a991-45be0aab70b9\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.674120 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-nova-metadata-neutron-config-0\") pod \"915c1bf8-3797-4c5a-a991-45be0aab70b9\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.674256 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28lzq\" (UniqueName: \"kubernetes.io/projected/915c1bf8-3797-4c5a-a991-45be0aab70b9-kube-api-access-28lzq\") pod \"915c1bf8-3797-4c5a-a991-45be0aab70b9\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.674303 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-ovn-metadata-agent-neutron-config-0\") pod \"915c1bf8-3797-4c5a-a991-45be0aab70b9\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.674372 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-inventory\") pod \"915c1bf8-3797-4c5a-a991-45be0aab70b9\" (UID: \"915c1bf8-3797-4c5a-a991-45be0aab70b9\") " Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.681219 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915c1bf8-3797-4c5a-a991-45be0aab70b9-kube-api-access-28lzq" (OuterVolumeSpecName: "kube-api-access-28lzq") pod "915c1bf8-3797-4c5a-a991-45be0aab70b9" (UID: "915c1bf8-3797-4c5a-a991-45be0aab70b9"). InnerVolumeSpecName "kube-api-access-28lzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.684066 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "915c1bf8-3797-4c5a-a991-45be0aab70b9" (UID: "915c1bf8-3797-4c5a-a991-45be0aab70b9"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.705516 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "915c1bf8-3797-4c5a-a991-45be0aab70b9" (UID: "915c1bf8-3797-4c5a-a991-45be0aab70b9"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.705923 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "915c1bf8-3797-4c5a-a991-45be0aab70b9" (UID: "915c1bf8-3797-4c5a-a991-45be0aab70b9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.711022 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "915c1bf8-3797-4c5a-a991-45be0aab70b9" (UID: "915c1bf8-3797-4c5a-a991-45be0aab70b9"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.712462 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-inventory" (OuterVolumeSpecName: "inventory") pod "915c1bf8-3797-4c5a-a991-45be0aab70b9" (UID: "915c1bf8-3797-4c5a-a991-45be0aab70b9"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.777230 4710 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.777265 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.777275 4710 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.777285 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28lzq\" (UniqueName: \"kubernetes.io/projected/915c1bf8-3797-4c5a-a991-45be0aab70b9-kube-api-access-28lzq\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.777298 4710 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:00 crc kubenswrapper[4710]: I1128 17:35:00.777309 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/915c1bf8-3797-4c5a-a991-45be0aab70b9-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.123869 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.127922 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv" event={"ID":"915c1bf8-3797-4c5a-a991-45be0aab70b9","Type":"ContainerDied","Data":"e28222a522b6ab087de9a5cd64150320f66bf5fb5df1be2237bffb87dc99569a"} Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.128006 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e28222a522b6ab087de9a5cd64150320f66bf5fb5df1be2237bffb87dc99569a" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.245090 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr"] Nov 28 17:35:01 crc kubenswrapper[4710]: E1128 17:35:01.245678 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915c1bf8-3797-4c5a-a991-45be0aab70b9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.245703 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="915c1bf8-3797-4c5a-a991-45be0aab70b9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.246054 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="915c1bf8-3797-4c5a-a991-45be0aab70b9" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.247000 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.250538 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.251117 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.251955 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.252176 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.254860 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.259681 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr"] Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.286413 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.286466 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st8ft\" (UniqueName: \"kubernetes.io/projected/04db3c20-a29b-4288-9ee7-4739e0796595-kube-api-access-st8ft\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.286570 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.286683 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.286800 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.388597 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.388952 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.388980 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st8ft\" (UniqueName: \"kubernetes.io/projected/04db3c20-a29b-4288-9ee7-4739e0796595-kube-api-access-st8ft\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.389057 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.389127 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.392691 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.393650 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.393969 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.394679 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-ssh-key\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.405560 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st8ft\" (UniqueName: \"kubernetes.io/projected/04db3c20-a29b-4288-9ee7-4739e0796595-kube-api-access-st8ft\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:01 crc kubenswrapper[4710]: I1128 17:35:01.568704 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:35:02 crc kubenswrapper[4710]: I1128 17:35:02.165803 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr"] Nov 28 17:35:02 crc kubenswrapper[4710]: I1128 17:35:02.170273 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:35:03 crc kubenswrapper[4710]: I1128 17:35:03.160565 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" event={"ID":"04db3c20-a29b-4288-9ee7-4739e0796595","Type":"ContainerStarted","Data":"66098f1bff19c8e92b8bcb0024e818104f3b8d12572ea9fec10812d749817c76"} Nov 28 17:35:03 crc kubenswrapper[4710]: I1128 17:35:03.161071 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" event={"ID":"04db3c20-a29b-4288-9ee7-4739e0796595","Type":"ContainerStarted","Data":"c5177296887f977e3b919e7c91faaf069b94191c03ecced82bad28b3418998d8"} Nov 28 17:35:03 crc kubenswrapper[4710]: I1128 17:35:03.181990 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" podStartSLOduration=1.60479237 podStartE2EDuration="2.181966392s" podCreationTimestamp="2025-11-28 17:35:01 +0000 UTC" firstStartedPulling="2025-11-28 17:35:02.169793298 +0000 UTC m=+2191.428093363" lastFinishedPulling="2025-11-28 17:35:02.74696734 +0000 UTC m=+2192.005267385" observedRunningTime="2025-11-28 17:35:03.170012377 +0000 UTC m=+2192.428312422" watchObservedRunningTime="2025-11-28 17:35:03.181966392 +0000 UTC m=+2192.440266437" Nov 28 17:35:13 crc kubenswrapper[4710]: I1128 17:35:13.343524 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:35:13 crc kubenswrapper[4710]: I1128 17:35:13.344201 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:35:43 crc kubenswrapper[4710]: I1128 17:35:43.343549 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Nov 28 17:35:43 crc kubenswrapper[4710]: I1128 17:35:43.344357 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.024888 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x2cbz"] Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.040038 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.042566 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2cbz"] Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.180088 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5feca2-f8e0-42d6-b11b-38a186ed4044-catalog-content\") pod \"community-operators-x2cbz\" (UID: \"bd5feca2-f8e0-42d6-b11b-38a186ed4044\") " pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.181012 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg4fc\" (UniqueName: \"kubernetes.io/projected/bd5feca2-f8e0-42d6-b11b-38a186ed4044-kube-api-access-wg4fc\") pod \"community-operators-x2cbz\" (UID: \"bd5feca2-f8e0-42d6-b11b-38a186ed4044\") " pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.181230 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5feca2-f8e0-42d6-b11b-38a186ed4044-utilities\") pod \"community-operators-x2cbz\" (UID: \"bd5feca2-f8e0-42d6-b11b-38a186ed4044\") " pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.283496 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5feca2-f8e0-42d6-b11b-38a186ed4044-catalog-content\") pod \"community-operators-x2cbz\" (UID: \"bd5feca2-f8e0-42d6-b11b-38a186ed4044\") " pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.283606 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg4fc\" (UniqueName: \"kubernetes.io/projected/bd5feca2-f8e0-42d6-b11b-38a186ed4044-kube-api-access-wg4fc\") pod \"community-operators-x2cbz\" (UID: \"bd5feca2-f8e0-42d6-b11b-38a186ed4044\") " pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.284048 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd5feca2-f8e0-42d6-b11b-38a186ed4044-catalog-content\") pod \"community-operators-x2cbz\" (UID: \"bd5feca2-f8e0-42d6-b11b-38a186ed4044\") " pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.284516 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/bd5feca2-f8e0-42d6-b11b-38a186ed4044-utilities\") pod \"community-operators-x2cbz\" (UID: \"bd5feca2-f8e0-42d6-b11b-38a186ed4044\") " pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.285144 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd5feca2-f8e0-42d6-b11b-38a186ed4044-utilities\") pod \"community-operators-x2cbz\" (UID: \"bd5feca2-f8e0-42d6-b11b-38a186ed4044\") " pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.311418 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg4fc\" (UniqueName: \"kubernetes.io/projected/bd5feca2-f8e0-42d6-b11b-38a186ed4044-kube-api-access-wg4fc\") pod \"community-operators-x2cbz\" (UID: \"bd5feca2-f8e0-42d6-b11b-38a186ed4044\") " pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.370301 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:01 crc kubenswrapper[4710]: I1128 17:36:01.982456 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2cbz"] Nov 28 17:36:02 crc kubenswrapper[4710]: I1128 17:36:02.896837 4710 generic.go:334] "Generic (PLEG): container finished" podID="bd5feca2-f8e0-42d6-b11b-38a186ed4044" containerID="94d938d2a1e2b4ef3ec924f3bd459a5d09a695958671333f2d7b1f02a2dc78fa" exitCode=0 Nov 28 17:36:02 crc kubenswrapper[4710]: I1128 17:36:02.896858 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2cbz" event={"ID":"bd5feca2-f8e0-42d6-b11b-38a186ed4044","Type":"ContainerDied","Data":"94d938d2a1e2b4ef3ec924f3bd459a5d09a695958671333f2d7b1f02a2dc78fa"} Nov 28 17:36:02 crc kubenswrapper[4710]: I1128 17:36:02.897338 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2cbz" event={"ID":"bd5feca2-f8e0-42d6-b11b-38a186ed4044","Type":"ContainerStarted","Data":"b1141c85bf69c69c7ff1f6d2cb0b09dcc67370bdfd38344b333ace31bfcd103d"} Nov 28 17:36:07 crc kubenswrapper[4710]: I1128 17:36:07.955123 4710 generic.go:334] "Generic (PLEG): container finished" podID="bd5feca2-f8e0-42d6-b11b-38a186ed4044" containerID="afb07d3124f920b3623acef5c65ffc0f38225601ec85072c144d5e0791c340f9" exitCode=0 Nov 28 17:36:07 crc kubenswrapper[4710]: I1128 17:36:07.955200 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2cbz" event={"ID":"bd5feca2-f8e0-42d6-b11b-38a186ed4044","Type":"ContainerDied","Data":"afb07d3124f920b3623acef5c65ffc0f38225601ec85072c144d5e0791c340f9"} Nov 28 17:36:10 crc kubenswrapper[4710]: I1128 17:36:09.999035 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2cbz" event={"ID":"bd5feca2-f8e0-42d6-b11b-38a186ed4044","Type":"ContainerStarted","Data":"07d8c73e6077ebc4317173a070a1bb8d16f0b707cc6803de30995f550c5bb7cf"} Nov 28 17:36:10 crc kubenswrapper[4710]: I1128 17:36:10.032539 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x2cbz" podStartSLOduration=4.167142747 podStartE2EDuration="10.032518938s" podCreationTimestamp="2025-11-28 17:36:00 +0000 UTC" firstStartedPulling="2025-11-28 17:36:02.899122453 +0000 UTC 
m=+2252.157422498" lastFinishedPulling="2025-11-28 17:36:08.764498644 +0000 UTC m=+2258.022798689" observedRunningTime="2025-11-28 17:36:10.018555332 +0000 UTC m=+2259.276855377" watchObservedRunningTime="2025-11-28 17:36:10.032518938 +0000 UTC m=+2259.290818993" Nov 28 17:36:11 crc kubenswrapper[4710]: I1128 17:36:11.371678 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:11 crc kubenswrapper[4710]: I1128 17:36:11.371996 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:11 crc kubenswrapper[4710]: I1128 17:36:11.417145 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.106223 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x2cbz" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.261891 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2cbz"] Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.298812 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6wpc2"] Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.299091 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6wpc2" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" containerName="registry-server" containerID="cri-o://8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff" gracePeriod=2 Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.343934 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.343996 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.344227 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.345922 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.345990 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" gracePeriod=600 Nov 28 17:36:13 crc 
kubenswrapper[4710]: E1128 17:36:13.472132 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.810856 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.890229 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-utilities\") pod \"070ba80e-9b6b-4149-b0ac-a95183059050\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.890412 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-catalog-content\") pod \"070ba80e-9b6b-4149-b0ac-a95183059050\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.890705 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x425\" (UniqueName: \"kubernetes.io/projected/070ba80e-9b6b-4149-b0ac-a95183059050-kube-api-access-6x425\") pod \"070ba80e-9b6b-4149-b0ac-a95183059050\" (UID: \"070ba80e-9b6b-4149-b0ac-a95183059050\") " Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.891153 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-utilities" (OuterVolumeSpecName: "utilities") pod "070ba80e-9b6b-4149-b0ac-a95183059050" (UID: "070ba80e-9b6b-4149-b0ac-a95183059050"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.898794 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/070ba80e-9b6b-4149-b0ac-a95183059050-kube-api-access-6x425" (OuterVolumeSpecName: "kube-api-access-6x425") pod "070ba80e-9b6b-4149-b0ac-a95183059050" (UID: "070ba80e-9b6b-4149-b0ac-a95183059050"). InnerVolumeSpecName "kube-api-access-6x425". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.953165 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "070ba80e-9b6b-4149-b0ac-a95183059050" (UID: "070ba80e-9b6b-4149-b0ac-a95183059050"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.993638 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x425\" (UniqueName: \"kubernetes.io/projected/070ba80e-9b6b-4149-b0ac-a95183059050-kube-api-access-6x425\") on node \"crc\" DevicePath \"\"" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.993682 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:36:13 crc kubenswrapper[4710]: I1128 17:36:13.993695 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/070ba80e-9b6b-4149-b0ac-a95183059050-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.042400 4710 generic.go:334] "Generic (PLEG): container finished" podID="070ba80e-9b6b-4149-b0ac-a95183059050" containerID="8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff" exitCode=0 Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.042473 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wpc2" event={"ID":"070ba80e-9b6b-4149-b0ac-a95183059050","Type":"ContainerDied","Data":"8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff"} Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.042504 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6wpc2" event={"ID":"070ba80e-9b6b-4149-b0ac-a95183059050","Type":"ContainerDied","Data":"f6027bfb1cb12f10e3d7af318067c6779308709dbaaa9af9c215825db6a90384"} Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.042524 4710 scope.go:117] "RemoveContainer" containerID="8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.043087 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6wpc2" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.045863 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" exitCode=0 Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.045935 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080"} Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.046861 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:36:14 crc kubenswrapper[4710]: E1128 17:36:14.047183 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.064045 4710 scope.go:117] "RemoveContainer" containerID="17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.106905 4710 scope.go:117] "RemoveContainer" containerID="31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.130814 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6wpc2"] Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.135002 4710 scope.go:117] "RemoveContainer" containerID="8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff" Nov 28 17:36:14 crc kubenswrapper[4710]: E1128 17:36:14.148864 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff\": container with ID starting with 8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff not found: ID does not exist" containerID="8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.148912 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff"} err="failed to get container status \"8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff\": rpc error: code = NotFound desc = could not find container \"8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff\": container with ID starting with 8fd64913fbbb331a62b226f3f836cbc0296dd365a8fee649c555f2a7cbd197ff not found: ID does not exist" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.148939 4710 scope.go:117] "RemoveContainer" containerID="17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419" Nov 28 17:36:14 crc kubenswrapper[4710]: E1128 17:36:14.150355 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419\": 
container with ID starting with 17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419 not found: ID does not exist" containerID="17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.150380 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419"} err="failed to get container status \"17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419\": rpc error: code = NotFound desc = could not find container \"17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419\": container with ID starting with 17e4c846ba46bc70c189005c8edb7c5a23cedf57222ad11c9d4cda0180150419 not found: ID does not exist" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.150394 4710 scope.go:117] "RemoveContainer" containerID="31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9" Nov 28 17:36:14 crc kubenswrapper[4710]: E1128 17:36:14.150879 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9\": container with ID starting with 31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9 not found: ID does not exist" containerID="31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.150899 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9"} err="failed to get container status \"31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9\": rpc error: code = NotFound desc = could not find container \"31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9\": container with ID starting with 31341ca39e4e55ae60b7e907de1fa7736a2247bd5761ef7b0a7a6bee7f0c39e9 not found: ID does not exist" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.150914 4710 scope.go:117] "RemoveContainer" containerID="525474b05cc0e8cfad42d7334f3128ea31ed4f5fe6977e6899ad8e185ddc6855" Nov 28 17:36:14 crc kubenswrapper[4710]: I1128 17:36:14.156863 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6wpc2"] Nov 28 17:36:15 crc kubenswrapper[4710]: I1128 17:36:15.163254 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" path="/var/lib/kubelet/pods/070ba80e-9b6b-4149-b0ac-a95183059050/volumes" Nov 28 17:36:29 crc kubenswrapper[4710]: I1128 17:36:29.141811 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:36:29 crc kubenswrapper[4710]: E1128 17:36:29.142776 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:36:43 crc kubenswrapper[4710]: I1128 17:36:43.142517 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:36:43 crc kubenswrapper[4710]: E1128 17:36:43.143747 
4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:36:56 crc kubenswrapper[4710]: I1128 17:36:56.142409 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:36:56 crc kubenswrapper[4710]: E1128 17:36:56.143651 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:37:11 crc kubenswrapper[4710]: I1128 17:37:11.149125 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:37:11 crc kubenswrapper[4710]: E1128 17:37:11.150043 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:37:22 crc kubenswrapper[4710]: I1128 17:37:22.142396 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:37:22 crc kubenswrapper[4710]: E1128 17:37:22.143083 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:37:36 crc kubenswrapper[4710]: I1128 17:37:36.141826 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:37:36 crc kubenswrapper[4710]: E1128 17:37:36.142619 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:37:49 crc kubenswrapper[4710]: I1128 17:37:49.144012 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:37:49 crc kubenswrapper[4710]: E1128 17:37:49.144895 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:38:03 crc kubenswrapper[4710]: I1128 17:38:03.141899 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:38:03 crc kubenswrapper[4710]: E1128 17:38:03.143045 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:38:15 crc kubenswrapper[4710]: I1128 17:38:15.141902 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:38:15 crc kubenswrapper[4710]: E1128 17:38:15.142634 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:38:30 crc kubenswrapper[4710]: I1128 17:38:30.142101 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:38:30 crc kubenswrapper[4710]: E1128 17:38:30.143046 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:38:41 crc kubenswrapper[4710]: I1128 17:38:41.154144 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:38:41 crc kubenswrapper[4710]: E1128 17:38:41.155110 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:38:53 crc kubenswrapper[4710]: I1128 17:38:53.154056 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:38:53 crc kubenswrapper[4710]: E1128 17:38:53.155629 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:39:07 crc kubenswrapper[4710]: I1128 17:39:07.141968 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:39:07 crc kubenswrapper[4710]: E1128 17:39:07.142947 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:39:20 crc kubenswrapper[4710]: I1128 17:39:20.141452 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:39:20 crc kubenswrapper[4710]: E1128 17:39:20.142328 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:39:28 crc kubenswrapper[4710]: I1128 17:39:28.624396 4710 generic.go:334] "Generic (PLEG): container finished" podID="04db3c20-a29b-4288-9ee7-4739e0796595" containerID="66098f1bff19c8e92b8bcb0024e818104f3b8d12572ea9fec10812d749817c76" exitCode=0 Nov 28 17:39:28 crc kubenswrapper[4710]: I1128 17:39:28.624508 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" event={"ID":"04db3c20-a29b-4288-9ee7-4739e0796595","Type":"ContainerDied","Data":"66098f1bff19c8e92b8bcb0024e818104f3b8d12572ea9fec10812d749817c76"} Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.156736 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.256684 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-ssh-key\") pod \"04db3c20-a29b-4288-9ee7-4739e0796595\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.257163 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st8ft\" (UniqueName: \"kubernetes.io/projected/04db3c20-a29b-4288-9ee7-4739e0796595-kube-api-access-st8ft\") pod \"04db3c20-a29b-4288-9ee7-4739e0796595\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.257306 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-secret-0\") pod \"04db3c20-a29b-4288-9ee7-4739e0796595\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.257591 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-combined-ca-bundle\") pod \"04db3c20-a29b-4288-9ee7-4739e0796595\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.257700 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-inventory\") pod \"04db3c20-a29b-4288-9ee7-4739e0796595\" (UID: \"04db3c20-a29b-4288-9ee7-4739e0796595\") " Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.263290 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "04db3c20-a29b-4288-9ee7-4739e0796595" (UID: "04db3c20-a29b-4288-9ee7-4739e0796595"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.266031 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04db3c20-a29b-4288-9ee7-4739e0796595-kube-api-access-st8ft" (OuterVolumeSpecName: "kube-api-access-st8ft") pod "04db3c20-a29b-4288-9ee7-4739e0796595" (UID: "04db3c20-a29b-4288-9ee7-4739e0796595"). InnerVolumeSpecName "kube-api-access-st8ft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.295907 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-inventory" (OuterVolumeSpecName: "inventory") pod "04db3c20-a29b-4288-9ee7-4739e0796595" (UID: "04db3c20-a29b-4288-9ee7-4739e0796595"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.298268 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "04db3c20-a29b-4288-9ee7-4739e0796595" (UID: "04db3c20-a29b-4288-9ee7-4739e0796595"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.300830 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "04db3c20-a29b-4288-9ee7-4739e0796595" (UID: "04db3c20-a29b-4288-9ee7-4739e0796595"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.360409 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.360466 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st8ft\" (UniqueName: \"kubernetes.io/projected/04db3c20-a29b-4288-9ee7-4739e0796595-kube-api-access-st8ft\") on node \"crc\" DevicePath \"\"" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.360486 4710 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.360506 4710 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.360523 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04db3c20-a29b-4288-9ee7-4739e0796595-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.645818 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" event={"ID":"04db3c20-a29b-4288-9ee7-4739e0796595","Type":"ContainerDied","Data":"c5177296887f977e3b919e7c91faaf069b94191c03ecced82bad28b3418998d8"} Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.645860 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.645866 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5177296887f977e3b919e7c91faaf069b94191c03ecced82bad28b3418998d8" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.738393 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v"] Nov 28 17:39:30 crc kubenswrapper[4710]: E1128 17:39:30.738981 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" containerName="extract-utilities" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.739003 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" containerName="extract-utilities" Nov 28 17:39:30 crc kubenswrapper[4710]: E1128 17:39:30.739043 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" containerName="extract-content" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.739050 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" containerName="extract-content" Nov 28 17:39:30 crc kubenswrapper[4710]: E1128 17:39:30.739058 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04db3c20-a29b-4288-9ee7-4739e0796595" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.739065 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="04db3c20-a29b-4288-9ee7-4739e0796595" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 17:39:30 crc kubenswrapper[4710]: E1128 17:39:30.739082 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" containerName="registry-server" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.739088 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" containerName="registry-server" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.739298 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="04db3c20-a29b-4288-9ee7-4739e0796595" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.739322 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="070ba80e-9b6b-4149-b0ac-a95183059050" containerName="registry-server" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.740150 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.744628 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.744892 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.745142 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.745307 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.745828 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.746489 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.746727 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.763583 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v"] Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.872703 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.872911 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.872943 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.872991 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.873023 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.873055 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv28n\" (UniqueName: \"kubernetes.io/projected/40b0849f-9e1d-4ced-83bd-af1db06a347c-kube-api-access-pv28n\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.873093 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.873131 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.873167 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.975724 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv28n\" (UniqueName: \"kubernetes.io/projected/40b0849f-9e1d-4ced-83bd-af1db06a347c-kube-api-access-pv28n\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.975817 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.975861 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.975900 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.975994 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.976109 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.976136 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.976179 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.976217 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.977800 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.980781 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.980835 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-inventory\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.980912 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.982030 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.982283 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.982714 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.983051 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:30 crc kubenswrapper[4710]: I1128 17:39:30.993540 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv28n\" (UniqueName: \"kubernetes.io/projected/40b0849f-9e1d-4ced-83bd-af1db06a347c-kube-api-access-pv28n\") pod \"nova-edpm-deployment-openstack-edpm-ipam-57c5v\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:31 crc kubenswrapper[4710]: I1128 17:39:31.062651 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:39:31 crc kubenswrapper[4710]: I1128 17:39:31.595059 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v"] Nov 28 17:39:31 crc kubenswrapper[4710]: W1128 17:39:31.595913 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40b0849f_9e1d_4ced_83bd_af1db06a347c.slice/crio-51e705c995f76102c3354753c4f8b16f734942c573be5339266d841ffea28592 WatchSource:0}: Error finding container 51e705c995f76102c3354753c4f8b16f734942c573be5339266d841ffea28592: Status 404 returned error can't find the container with id 51e705c995f76102c3354753c4f8b16f734942c573be5339266d841ffea28592 Nov 28 17:39:31 crc kubenswrapper[4710]: I1128 17:39:31.657039 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" event={"ID":"40b0849f-9e1d-4ced-83bd-af1db06a347c","Type":"ContainerStarted","Data":"51e705c995f76102c3354753c4f8b16f734942c573be5339266d841ffea28592"} Nov 28 17:39:32 crc kubenswrapper[4710]: I1128 17:39:32.064347 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:39:32 crc kubenswrapper[4710]: I1128 17:39:32.668009 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" event={"ID":"40b0849f-9e1d-4ced-83bd-af1db06a347c","Type":"ContainerStarted","Data":"0c4ab5b7bc93d4195e9396a5f5c6255a4b56d517614d762a952ff6b67492d677"} Nov 28 17:39:32 crc kubenswrapper[4710]: I1128 17:39:32.710638 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" podStartSLOduration=2.248453757 podStartE2EDuration="2.710614686s" podCreationTimestamp="2025-11-28 17:39:30 +0000 UTC" firstStartedPulling="2025-11-28 17:39:31.598373916 +0000 UTC m=+2460.856673961" lastFinishedPulling="2025-11-28 17:39:32.060534855 +0000 UTC m=+2461.318834890" observedRunningTime="2025-11-28 17:39:32.693680004 +0000 UTC m=+2461.951980049" watchObservedRunningTime="2025-11-28 17:39:32.710614686 +0000 UTC m=+2461.968914731" Nov 28 17:39:34 crc kubenswrapper[4710]: I1128 17:39:34.142500 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:39:34 crc kubenswrapper[4710]: E1128 17:39:34.143668 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:39:47 crc kubenswrapper[4710]: I1128 17:39:47.142130 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:39:47 crc kubenswrapper[4710]: E1128 17:39:47.143320 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:39:58 crc kubenswrapper[4710]: I1128 17:39:58.142157 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:39:58 crc kubenswrapper[4710]: E1128 17:39:58.143122 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.616127 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bn2qp"] Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.619632 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.633922 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn2qp"] Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.699126 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-catalog-content\") pod \"certified-operators-bn2qp\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.699212 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-utilities\") pod \"certified-operators-bn2qp\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.699396 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ljz\" (UniqueName: \"kubernetes.io/projected/407450b5-b77f-4b34-8827-c03f00144d1b-kube-api-access-m6ljz\") pod \"certified-operators-bn2qp\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.801239 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-catalog-content\") pod \"certified-operators-bn2qp\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.801559 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-utilities\") pod \"certified-operators-bn2qp\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.801826 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-catalog-content\") pod \"certified-operators-bn2qp\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.801950 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-utilities\") pod \"certified-operators-bn2qp\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.802277 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6ljz\" (UniqueName: \"kubernetes.io/projected/407450b5-b77f-4b34-8827-c03f00144d1b-kube-api-access-m6ljz\") pod \"certified-operators-bn2qp\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.829830 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ljz\" (UniqueName: \"kubernetes.io/projected/407450b5-b77f-4b34-8827-c03f00144d1b-kube-api-access-m6ljz\") pod \"certified-operators-bn2qp\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:08 crc kubenswrapper[4710]: I1128 17:40:08.958189 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:09 crc kubenswrapper[4710]: I1128 17:40:09.488882 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bn2qp"] Nov 28 17:40:10 crc kubenswrapper[4710]: I1128 17:40:10.131556 4710 generic.go:334] "Generic (PLEG): container finished" podID="407450b5-b77f-4b34-8827-c03f00144d1b" containerID="9d6c7d656525d2bb0bf29db35409845a0bba69ae8b2f2a82c0d814c35da498f0" exitCode=0 Nov 28 17:40:10 crc kubenswrapper[4710]: I1128 17:40:10.131637 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2qp" event={"ID":"407450b5-b77f-4b34-8827-c03f00144d1b","Type":"ContainerDied","Data":"9d6c7d656525d2bb0bf29db35409845a0bba69ae8b2f2a82c0d814c35da498f0"} Nov 28 17:40:10 crc kubenswrapper[4710]: I1128 17:40:10.131891 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2qp" event={"ID":"407450b5-b77f-4b34-8827-c03f00144d1b","Type":"ContainerStarted","Data":"0bdf4fa901a68fdb34831dfd08adb158376a504971cfec2b911e192ea75fdd9e"} Nov 28 17:40:10 crc kubenswrapper[4710]: I1128 17:40:10.134320 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:40:11 crc kubenswrapper[4710]: I1128 17:40:11.151173 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:40:11 crc kubenswrapper[4710]: E1128 17:40:11.151775 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:40:11 crc kubenswrapper[4710]: 
I1128 17:40:11.155729 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2qp" event={"ID":"407450b5-b77f-4b34-8827-c03f00144d1b","Type":"ContainerStarted","Data":"e68c4f48e3f3b27fe1d4744401a47d0d8cae897e5188e75c1a59007d084fe674"} Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.200470 4710 generic.go:334] "Generic (PLEG): container finished" podID="407450b5-b77f-4b34-8827-c03f00144d1b" containerID="e68c4f48e3f3b27fe1d4744401a47d0d8cae897e5188e75c1a59007d084fe674" exitCode=0 Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.200842 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2qp" event={"ID":"407450b5-b77f-4b34-8827-c03f00144d1b","Type":"ContainerDied","Data":"e68c4f48e3f3b27fe1d4744401a47d0d8cae897e5188e75c1a59007d084fe674"} Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.230565 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8br2h"] Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.233391 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.253976 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8br2h"] Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.387230 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr48p\" (UniqueName: \"kubernetes.io/projected/dbb99088-6e14-4381-87b4-74b216c1ddea-kube-api-access-jr48p\") pod \"redhat-marketplace-8br2h\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.387566 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-catalog-content\") pod \"redhat-marketplace-8br2h\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.387815 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-utilities\") pod \"redhat-marketplace-8br2h\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.489728 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-utilities\") pod \"redhat-marketplace-8br2h\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.490128 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr48p\" (UniqueName: \"kubernetes.io/projected/dbb99088-6e14-4381-87b4-74b216c1ddea-kube-api-access-jr48p\") pod \"redhat-marketplace-8br2h\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.490180 4710 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-catalog-content\") pod \"redhat-marketplace-8br2h\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.490323 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-utilities\") pod \"redhat-marketplace-8br2h\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.490658 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-catalog-content\") pod \"redhat-marketplace-8br2h\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.514752 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr48p\" (UniqueName: \"kubernetes.io/projected/dbb99088-6e14-4381-87b4-74b216c1ddea-kube-api-access-jr48p\") pod \"redhat-marketplace-8br2h\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:12 crc kubenswrapper[4710]: I1128 17:40:12.562926 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:13 crc kubenswrapper[4710]: I1128 17:40:13.099134 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8br2h"] Nov 28 17:40:13 crc kubenswrapper[4710]: I1128 17:40:13.217345 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8br2h" event={"ID":"dbb99088-6e14-4381-87b4-74b216c1ddea","Type":"ContainerStarted","Data":"f53c6192911c6861a4cc81939a649933721b4d1d63e7698031d3827c8ecffe56"} Nov 28 17:40:13 crc kubenswrapper[4710]: I1128 17:40:13.223485 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2qp" event={"ID":"407450b5-b77f-4b34-8827-c03f00144d1b","Type":"ContainerStarted","Data":"860a2c50f1e92c8aa4b526dd2ec8a920b5b03b2693be55537bfbc37cd3e71a21"} Nov 28 17:40:13 crc kubenswrapper[4710]: I1128 17:40:13.245619 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bn2qp" podStartSLOduration=2.4822913 podStartE2EDuration="5.24560016s" podCreationTimestamp="2025-11-28 17:40:08 +0000 UTC" firstStartedPulling="2025-11-28 17:40:10.134036026 +0000 UTC m=+2499.392336071" lastFinishedPulling="2025-11-28 17:40:12.897344896 +0000 UTC m=+2502.155644931" observedRunningTime="2025-11-28 17:40:13.243598827 +0000 UTC m=+2502.501898872" watchObservedRunningTime="2025-11-28 17:40:13.24560016 +0000 UTC m=+2502.503900205" Nov 28 17:40:14 crc kubenswrapper[4710]: I1128 17:40:14.237027 4710 generic.go:334] "Generic (PLEG): container finished" podID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerID="8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878" exitCode=0 Nov 28 17:40:14 crc kubenswrapper[4710]: I1128 17:40:14.237416 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8br2h" 
event={"ID":"dbb99088-6e14-4381-87b4-74b216c1ddea","Type":"ContainerDied","Data":"8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878"} Nov 28 17:40:15 crc kubenswrapper[4710]: I1128 17:40:15.249120 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8br2h" event={"ID":"dbb99088-6e14-4381-87b4-74b216c1ddea","Type":"ContainerStarted","Data":"66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d"} Nov 28 17:40:16 crc kubenswrapper[4710]: I1128 17:40:16.263521 4710 generic.go:334] "Generic (PLEG): container finished" podID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerID="66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d" exitCode=0 Nov 28 17:40:16 crc kubenswrapper[4710]: I1128 17:40:16.263584 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8br2h" event={"ID":"dbb99088-6e14-4381-87b4-74b216c1ddea","Type":"ContainerDied","Data":"66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d"} Nov 28 17:40:17 crc kubenswrapper[4710]: I1128 17:40:17.304334 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8br2h" podStartSLOduration=2.4607873639999998 podStartE2EDuration="5.304317481s" podCreationTimestamp="2025-11-28 17:40:12 +0000 UTC" firstStartedPulling="2025-11-28 17:40:14.239180183 +0000 UTC m=+2503.497480248" lastFinishedPulling="2025-11-28 17:40:17.08271032 +0000 UTC m=+2506.341010365" observedRunningTime="2025-11-28 17:40:17.297108361 +0000 UTC m=+2506.555408406" watchObservedRunningTime="2025-11-28 17:40:17.304317481 +0000 UTC m=+2506.562617526" Nov 28 17:40:18 crc kubenswrapper[4710]: I1128 17:40:18.295088 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8br2h" event={"ID":"dbb99088-6e14-4381-87b4-74b216c1ddea","Type":"ContainerStarted","Data":"b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25"} Nov 28 17:40:18 crc kubenswrapper[4710]: I1128 17:40:18.958928 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:18 crc kubenswrapper[4710]: I1128 17:40:18.959604 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:19 crc kubenswrapper[4710]: I1128 17:40:19.007188 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:19 crc kubenswrapper[4710]: I1128 17:40:19.347809 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:22 crc kubenswrapper[4710]: I1128 17:40:22.395505 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn2qp"] Nov 28 17:40:22 crc kubenswrapper[4710]: I1128 17:40:22.397346 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bn2qp" podUID="407450b5-b77f-4b34-8827-c03f00144d1b" containerName="registry-server" containerID="cri-o://860a2c50f1e92c8aa4b526dd2ec8a920b5b03b2693be55537bfbc37cd3e71a21" gracePeriod=2 Nov 28 17:40:22 crc kubenswrapper[4710]: I1128 17:40:22.563443 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:22 crc kubenswrapper[4710]: I1128 
17:40:22.563980 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:22 crc kubenswrapper[4710]: I1128 17:40:22.614191 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.358103 4710 generic.go:334] "Generic (PLEG): container finished" podID="407450b5-b77f-4b34-8827-c03f00144d1b" containerID="860a2c50f1e92c8aa4b526dd2ec8a920b5b03b2693be55537bfbc37cd3e71a21" exitCode=0 Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.358193 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2qp" event={"ID":"407450b5-b77f-4b34-8827-c03f00144d1b","Type":"ContainerDied","Data":"860a2c50f1e92c8aa4b526dd2ec8a920b5b03b2693be55537bfbc37cd3e71a21"} Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.358546 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bn2qp" event={"ID":"407450b5-b77f-4b34-8827-c03f00144d1b","Type":"ContainerDied","Data":"0bdf4fa901a68fdb34831dfd08adb158376a504971cfec2b911e192ea75fdd9e"} Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.358573 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bdf4fa901a68fdb34831dfd08adb158376a504971cfec2b911e192ea75fdd9e" Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.444558 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.445943 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.561205 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6ljz\" (UniqueName: \"kubernetes.io/projected/407450b5-b77f-4b34-8827-c03f00144d1b-kube-api-access-m6ljz\") pod \"407450b5-b77f-4b34-8827-c03f00144d1b\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.561980 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-catalog-content\") pod \"407450b5-b77f-4b34-8827-c03f00144d1b\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.562189 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-utilities\") pod \"407450b5-b77f-4b34-8827-c03f00144d1b\" (UID: \"407450b5-b77f-4b34-8827-c03f00144d1b\") " Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.563810 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-utilities" (OuterVolumeSpecName: "utilities") pod "407450b5-b77f-4b34-8827-c03f00144d1b" (UID: "407450b5-b77f-4b34-8827-c03f00144d1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
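
The interleaved probe transitions and teardown entries above are easier to follow when the PLEG events are replayed per pod. A minimal sketch (same assumed kubelet.log file as earlier) that groups the "SyncLoop (PLEG)" events into per-pod container histories, e.g. the ContainerStarted/ContainerDied sequence for certified-operators-bn2qp:

```python
import re
from collections import defaultdict

# Groups "SyncLoop (PLEG)" events by pod so each pod's ContainerStarted /
# ContainerDied sequence can be read in order of appearance.
EVENT = re.compile(
    r'"SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" '
    r'event=\{"ID":"[^"]+","Type":"(?P<type>[A-Za-z]+)","Data":"(?P<cid>[0-9a-f]+)"\}'
)

history = defaultdict(list)
for m in EVENT.finditer(open("kubelet.log").read()):
    history[m.group("pod")].append((m.group("type"), m.group("cid")[:12]))

for pod, events in history.items():
    print(pod)
    for etype, cid in events:
        print(f"  {etype:16} {cid}")  # event type and truncated container ID
```
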
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.572388 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407450b5-b77f-4b34-8827-c03f00144d1b-kube-api-access-m6ljz" (OuterVolumeSpecName: "kube-api-access-m6ljz") pod "407450b5-b77f-4b34-8827-c03f00144d1b" (UID: "407450b5-b77f-4b34-8827-c03f00144d1b"). InnerVolumeSpecName "kube-api-access-m6ljz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.622458 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "407450b5-b77f-4b34-8827-c03f00144d1b" (UID: "407450b5-b77f-4b34-8827-c03f00144d1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.664521 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6ljz\" (UniqueName: \"kubernetes.io/projected/407450b5-b77f-4b34-8827-c03f00144d1b-kube-api-access-m6ljz\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.664550 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:23 crc kubenswrapper[4710]: I1128 17:40:23.664560 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/407450b5-b77f-4b34-8827-c03f00144d1b-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:24 crc kubenswrapper[4710]: I1128 17:40:24.367956 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bn2qp" Nov 28 17:40:24 crc kubenswrapper[4710]: I1128 17:40:24.441188 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bn2qp"] Nov 28 17:40:24 crc kubenswrapper[4710]: I1128 17:40:24.454021 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bn2qp"] Nov 28 17:40:25 crc kubenswrapper[4710]: I1128 17:40:25.090399 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8br2h"] Nov 28 17:40:25 crc kubenswrapper[4710]: I1128 17:40:25.142155 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:40:25 crc kubenswrapper[4710]: E1128 17:40:25.142490 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:40:25 crc kubenswrapper[4710]: I1128 17:40:25.152777 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407450b5-b77f-4b34-8827-c03f00144d1b" path="/var/lib/kubelet/pods/407450b5-b77f-4b34-8827-c03f00144d1b/volumes" Nov 28 17:40:26 crc kubenswrapper[4710]: I1128 17:40:26.383731 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8br2h" podUID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerName="registry-server" containerID="cri-o://b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25" gracePeriod=2 Nov 28 17:40:26 crc kubenswrapper[4710]: I1128 17:40:26.897791 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.029836 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-catalog-content\") pod \"dbb99088-6e14-4381-87b4-74b216c1ddea\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.030106 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr48p\" (UniqueName: \"kubernetes.io/projected/dbb99088-6e14-4381-87b4-74b216c1ddea-kube-api-access-jr48p\") pod \"dbb99088-6e14-4381-87b4-74b216c1ddea\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.030257 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-utilities\") pod \"dbb99088-6e14-4381-87b4-74b216c1ddea\" (UID: \"dbb99088-6e14-4381-87b4-74b216c1ddea\") " Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.031720 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-utilities" (OuterVolumeSpecName: "utilities") pod "dbb99088-6e14-4381-87b4-74b216c1ddea" (UID: "dbb99088-6e14-4381-87b4-74b216c1ddea"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.038513 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb99088-6e14-4381-87b4-74b216c1ddea-kube-api-access-jr48p" (OuterVolumeSpecName: "kube-api-access-jr48p") pod "dbb99088-6e14-4381-87b4-74b216c1ddea" (UID: "dbb99088-6e14-4381-87b4-74b216c1ddea"). InnerVolumeSpecName "kube-api-access-jr48p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.055578 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dbb99088-6e14-4381-87b4-74b216c1ddea" (UID: "dbb99088-6e14-4381-87b4-74b216c1ddea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.132532 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.132749 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dbb99088-6e14-4381-87b4-74b216c1ddea-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.132832 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr48p\" (UniqueName: \"kubernetes.io/projected/dbb99088-6e14-4381-87b4-74b216c1ddea-kube-api-access-jr48p\") on node \"crc\" DevicePath \"\"" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.395493 4710 generic.go:334] "Generic (PLEG): container finished" podID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerID="b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25" exitCode=0 Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.395551 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8br2h" event={"ID":"dbb99088-6e14-4381-87b4-74b216c1ddea","Type":"ContainerDied","Data":"b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25"} Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.395959 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8br2h" event={"ID":"dbb99088-6e14-4381-87b4-74b216c1ddea","Type":"ContainerDied","Data":"f53c6192911c6861a4cc81939a649933721b4d1d63e7698031d3827c8ecffe56"} Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.395982 4710 scope.go:117] "RemoveContainer" containerID="b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.395584 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8br2h" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.419708 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8br2h"] Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.425531 4710 scope.go:117] "RemoveContainer" containerID="66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.430594 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8br2h"] Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.450388 4710 scope.go:117] "RemoveContainer" containerID="8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.496104 4710 scope.go:117] "RemoveContainer" containerID="b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25" Nov 28 17:40:27 crc kubenswrapper[4710]: E1128 17:40:27.496599 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25\": container with ID starting with b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25 not found: ID does not exist" containerID="b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.496637 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25"} err="failed to get container status \"b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25\": rpc error: code = NotFound desc = could not find container \"b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25\": container with ID starting with b2248bbb04e1c011e7b7d75140a11531add784dd918342d525db75b9b3e26b25 not found: ID does not exist" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.496659 4710 scope.go:117] "RemoveContainer" containerID="66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d" Nov 28 17:40:27 crc kubenswrapper[4710]: E1128 17:40:27.497145 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d\": container with ID starting with 66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d not found: ID does not exist" containerID="66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.497304 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d"} err="failed to get container status \"66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d\": rpc error: code = NotFound desc = could not find container \"66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d\": container with ID starting with 66cbd2be5caa7ee8e806c98201b17c84be22ae52eb2f652e9f0b90a8d9b7ea4d not found: ID does not exist" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.497402 4710 scope.go:117] "RemoveContainer" containerID="8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878" Nov 28 17:40:27 crc kubenswrapper[4710]: E1128 17:40:27.497803 4710 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878\": container with ID starting with 8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878 not found: ID does not exist" containerID="8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878" Nov 28 17:40:27 crc kubenswrapper[4710]: I1128 17:40:27.497835 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878"} err="failed to get container status \"8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878\": rpc error: code = NotFound desc = could not find container \"8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878\": container with ID starting with 8b999418d3724edb50601ab3d2aed31b29cb5b773a66f856e58211b9c69b0878 not found: ID does not exist" Nov 28 17:40:29 crc kubenswrapper[4710]: I1128 17:40:29.153259 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb99088-6e14-4381-87b4-74b216c1ddea" path="/var/lib/kubelet/pods/dbb99088-6e14-4381-87b4-74b216c1ddea/volumes" Nov 28 17:40:40 crc kubenswrapper[4710]: I1128 17:40:40.144072 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:40:40 crc kubenswrapper[4710]: E1128 17:40:40.144932 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:40:54 crc kubenswrapper[4710]: I1128 17:40:54.142339 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:40:54 crc kubenswrapper[4710]: E1128 17:40:54.143082 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:41:06 crc kubenswrapper[4710]: I1128 17:41:06.141979 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:41:06 crc kubenswrapper[4710]: E1128 17:41:06.142748 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:41:18 crc kubenswrapper[4710]: I1128 17:41:18.141909 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:41:18 crc kubenswrapper[4710]: I1128 17:41:18.983925 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"eee63a2fe472ec7898194cd95ff06f894330d78d08bb63d109cdb16983d45005"} Nov 28 17:42:27 crc kubenswrapper[4710]: I1128 17:42:27.738336 4710 generic.go:334] "Generic (PLEG): container finished" podID="40b0849f-9e1d-4ced-83bd-af1db06a347c" containerID="0c4ab5b7bc93d4195e9396a5f5c6255a4b56d517614d762a952ff6b67492d677" exitCode=0 Nov 28 17:42:27 crc kubenswrapper[4710]: I1128 17:42:27.738396 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" event={"ID":"40b0849f-9e1d-4ced-83bd-af1db06a347c","Type":"ContainerDied","Data":"0c4ab5b7bc93d4195e9396a5f5c6255a4b56d517614d762a952ff6b67492d677"} Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.374573 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.554953 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-1\") pod \"40b0849f-9e1d-4ced-83bd-af1db06a347c\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.555046 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-inventory\") pod \"40b0849f-9e1d-4ced-83bd-af1db06a347c\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.555107 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-0\") pod \"40b0849f-9e1d-4ced-83bd-af1db06a347c\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.555136 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-ssh-key\") pod \"40b0849f-9e1d-4ced-83bd-af1db06a347c\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.555210 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv28n\" (UniqueName: \"kubernetes.io/projected/40b0849f-9e1d-4ced-83bd-af1db06a347c-kube-api-access-pv28n\") pod \"40b0849f-9e1d-4ced-83bd-af1db06a347c\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.555235 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-1\") pod \"40b0849f-9e1d-4ced-83bd-af1db06a347c\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.555348 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-0\") pod \"40b0849f-9e1d-4ced-83bd-af1db06a347c\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") 
" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.555395 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-extra-config-0\") pod \"40b0849f-9e1d-4ced-83bd-af1db06a347c\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.555433 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-combined-ca-bundle\") pod \"40b0849f-9e1d-4ced-83bd-af1db06a347c\" (UID: \"40b0849f-9e1d-4ced-83bd-af1db06a347c\") " Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.562781 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "40b0849f-9e1d-4ced-83bd-af1db06a347c" (UID: "40b0849f-9e1d-4ced-83bd-af1db06a347c"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.566507 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40b0849f-9e1d-4ced-83bd-af1db06a347c-kube-api-access-pv28n" (OuterVolumeSpecName: "kube-api-access-pv28n") pod "40b0849f-9e1d-4ced-83bd-af1db06a347c" (UID: "40b0849f-9e1d-4ced-83bd-af1db06a347c"). InnerVolumeSpecName "kube-api-access-pv28n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.590225 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "40b0849f-9e1d-4ced-83bd-af1db06a347c" (UID: "40b0849f-9e1d-4ced-83bd-af1db06a347c"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.590308 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "40b0849f-9e1d-4ced-83bd-af1db06a347c" (UID: "40b0849f-9e1d-4ced-83bd-af1db06a347c"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.592215 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "40b0849f-9e1d-4ced-83bd-af1db06a347c" (UID: "40b0849f-9e1d-4ced-83bd-af1db06a347c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.600203 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-inventory" (OuterVolumeSpecName: "inventory") pod "40b0849f-9e1d-4ced-83bd-af1db06a347c" (UID: "40b0849f-9e1d-4ced-83bd-af1db06a347c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.600286 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "40b0849f-9e1d-4ced-83bd-af1db06a347c" (UID: "40b0849f-9e1d-4ced-83bd-af1db06a347c"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.603700 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "40b0849f-9e1d-4ced-83bd-af1db06a347c" (UID: "40b0849f-9e1d-4ced-83bd-af1db06a347c"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.624948 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "40b0849f-9e1d-4ced-83bd-af1db06a347c" (UID: "40b0849f-9e1d-4ced-83bd-af1db06a347c"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.659479 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv28n\" (UniqueName: \"kubernetes.io/projected/40b0849f-9e1d-4ced-83bd-af1db06a347c-kube-api-access-pv28n\") on node \"crc\" DevicePath \"\"" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.659512 4710 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.659522 4710 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.659532 4710 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.659547 4710 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.659558 4710 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.659570 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.659582 4710 reconciler_common.go:293] "Volume detached for 
volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.659593 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/40b0849f-9e1d-4ced-83bd-af1db06a347c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.762818 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" event={"ID":"40b0849f-9e1d-4ced-83bd-af1db06a347c","Type":"ContainerDied","Data":"51e705c995f76102c3354753c4f8b16f734942c573be5339266d841ffea28592"} Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.762861 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e705c995f76102c3354753c4f8b16f734942c573be5339266d841ffea28592" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.762901 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-57c5v" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.853638 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7"] Nov 28 17:42:29 crc kubenswrapper[4710]: E1128 17:42:29.854596 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40b0849f-9e1d-4ced-83bd-af1db06a347c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.854679 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="40b0849f-9e1d-4ced-83bd-af1db06a347c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 17:42:29 crc kubenswrapper[4710]: E1128 17:42:29.854806 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerName="extract-utilities" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.854894 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerName="extract-utilities" Nov 28 17:42:29 crc kubenswrapper[4710]: E1128 17:42:29.854959 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407450b5-b77f-4b34-8827-c03f00144d1b" containerName="extract-content" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.855011 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="407450b5-b77f-4b34-8827-c03f00144d1b" containerName="extract-content" Nov 28 17:42:29 crc kubenswrapper[4710]: E1128 17:42:29.855077 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerName="extract-content" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.855128 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerName="extract-content" Nov 28 17:42:29 crc kubenswrapper[4710]: E1128 17:42:29.855190 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerName="registry-server" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.855241 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerName="registry-server" Nov 28 17:42:29 crc kubenswrapper[4710]: E1128 17:42:29.855311 4710 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="407450b5-b77f-4b34-8827-c03f00144d1b" containerName="extract-utilities" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.855362 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="407450b5-b77f-4b34-8827-c03f00144d1b" containerName="extract-utilities" Nov 28 17:42:29 crc kubenswrapper[4710]: E1128 17:42:29.855417 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407450b5-b77f-4b34-8827-c03f00144d1b" containerName="registry-server" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.855484 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="407450b5-b77f-4b34-8827-c03f00144d1b" containerName="registry-server" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.855872 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb99088-6e14-4381-87b4-74b216c1ddea" containerName="registry-server" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.855960 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="40b0849f-9e1d-4ced-83bd-af1db06a347c" containerName="nova-edpm-deployment-openstack-edpm-ipam" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.856044 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="407450b5-b77f-4b34-8827-c03f00144d1b" containerName="registry-server" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.857063 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.859396 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.859637 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.859667 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.859826 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.859945 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.869430 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7"] Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.965582 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mhr5\" (UniqueName: \"kubernetes.io/projected/6713c8fc-ccd2-4956-8102-4d888af17897-kube-api-access-2mhr5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.965646 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 
17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.965796 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.965871 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.965895 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.965920 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:29 crc kubenswrapper[4710]: I1128 17:42:29.965951 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.068400 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mhr5\" (UniqueName: \"kubernetes.io/projected/6713c8fc-ccd2-4956-8102-4d888af17897-kube-api-access-2mhr5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.068460 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.068617 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: 
\"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.068703 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.068728 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.068773 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.068806 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.072635 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.073037 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.073181 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.073250 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-inventory\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.074194 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.075206 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.091560 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mhr5\" (UniqueName: \"kubernetes.io/projected/6713c8fc-ccd2-4956-8102-4d888af17897-kube-api-access-2mhr5\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.188646 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.729054 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7"] Nov 28 17:42:30 crc kubenswrapper[4710]: W1128 17:42:30.749145 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6713c8fc_ccd2_4956_8102_4d888af17897.slice/crio-f36d7be9f74014fcdc21e97e57efcd388a5dd9326685a123fc8c0d928eb98df5 WatchSource:0}: Error finding container f36d7be9f74014fcdc21e97e57efcd388a5dd9326685a123fc8c0d928eb98df5: Status 404 returned error can't find the container with id f36d7be9f74014fcdc21e97e57efcd388a5dd9326685a123fc8c0d928eb98df5 Nov 28 17:42:30 crc kubenswrapper[4710]: I1128 17:42:30.774995 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" event={"ID":"6713c8fc-ccd2-4956-8102-4d888af17897","Type":"ContainerStarted","Data":"f36d7be9f74014fcdc21e97e57efcd388a5dd9326685a123fc8c0d928eb98df5"} Nov 28 17:42:31 crc kubenswrapper[4710]: I1128 17:42:31.200276 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:42:31 crc kubenswrapper[4710]: I1128 17:42:31.785876 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" event={"ID":"6713c8fc-ccd2-4956-8102-4d888af17897","Type":"ContainerStarted","Data":"455de038a057db74e1322da6f51f5709ba200621e4893a099d26b8a04eb59639"} Nov 28 17:42:31 crc kubenswrapper[4710]: I1128 17:42:31.815938 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" podStartSLOduration=2.369417306 podStartE2EDuration="2.815913484s" 
podCreationTimestamp="2025-11-28 17:42:29 +0000 UTC" firstStartedPulling="2025-11-28 17:42:30.751214645 +0000 UTC m=+2640.009514690" lastFinishedPulling="2025-11-28 17:42:31.197710813 +0000 UTC m=+2640.456010868" observedRunningTime="2025-11-28 17:42:31.814962373 +0000 UTC m=+2641.073262428" watchObservedRunningTime="2025-11-28 17:42:31.815913484 +0000 UTC m=+2641.074213529" Nov 28 17:43:43 crc kubenswrapper[4710]: I1128 17:43:43.343552 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:43:43 crc kubenswrapper[4710]: I1128 17:43:43.344105 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:44:13 crc kubenswrapper[4710]: I1128 17:44:13.355561 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:44:13 crc kubenswrapper[4710]: I1128 17:44:13.357188 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:44:43 crc kubenswrapper[4710]: I1128 17:44:43.344058 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:44:43 crc kubenswrapper[4710]: I1128 17:44:43.344690 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:44:43 crc kubenswrapper[4710]: I1128 17:44:43.344779 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:44:43 crc kubenswrapper[4710]: I1128 17:44:43.345953 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"eee63a2fe472ec7898194cd95ff06f894330d78d08bb63d109cdb16983d45005"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:44:43 crc kubenswrapper[4710]: I1128 17:44:43.346079 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" 
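Three liveness failures at 17:43:43, 17:44:13 and 17:44:43, exactly 30 s apart, precede the unhealthy verdict and the kill below; that cadence is consistent with a probe spec of periodSeconds: 30 and failureThreshold: 3, though the probe configuration itself is not in the log, so treat this as an inference. A small sketch that recovers the cadence from the "Probe failed" entries, reusing the entry splitter from the earlier sketch; the regex again assumes the exact shape above:

import re
from datetime import datetime

# Fitted to the prober.go "Probe failed" entries above (assumed shape).
PROBE = re.compile(
    r'(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}).*?"Probe failed" '
    r'probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)"'
)

def failure_gaps(entries, year=2025):
    """Seconds between consecutive probe failures, per (pod, probe type)."""
    last, gaps = {}, []
    for entry in entries:
        m = PROBE.search(entry)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m.group('ts')}", "%Y %b %d %H:%M:%S")
        key = (m.group('pod'), m.group('type'))
        if key in last:
            gaps.append((key, (ts - last[key]).total_seconds()))
        last[key] = ts
    return gaps

On the three failures above this returns two 30.0 s gaps (17:43:43 -> 17:44:13 -> 17:44:43). The container is then killed with gracePeriod=600, the old container exits with exitCode=0 (a kubelet-initiated stop, not a crash), and a new container ID (018bf19f...) appears about a second later.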
containerID="cri-o://eee63a2fe472ec7898194cd95ff06f894330d78d08bb63d109cdb16983d45005" gracePeriod=600 Nov 28 17:44:44 crc kubenswrapper[4710]: I1128 17:44:44.316671 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="eee63a2fe472ec7898194cd95ff06f894330d78d08bb63d109cdb16983d45005" exitCode=0 Nov 28 17:44:44 crc kubenswrapper[4710]: I1128 17:44:44.316821 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"eee63a2fe472ec7898194cd95ff06f894330d78d08bb63d109cdb16983d45005"} Nov 28 17:44:44 crc kubenswrapper[4710]: I1128 17:44:44.317569 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe"} Nov 28 17:44:44 crc kubenswrapper[4710]: I1128 17:44:44.317643 4710 scope.go:117] "RemoveContainer" containerID="2bb9c85c13f4827d8637a0e3cab30a9310196524a0792524b8d571baa4666080" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.152059 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp"] Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.154641 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.158130 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.161796 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.175453 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp"] Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.300967 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a0f56e-c945-43f5-b623-63c01127f629-secret-volume\") pod \"collect-profiles-29405865-vqgsp\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.301158 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a0f56e-c945-43f5-b623-63c01127f629-config-volume\") pod \"collect-profiles-29405865-vqgsp\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.301324 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p2bp\" (UniqueName: \"kubernetes.io/projected/34a0f56e-c945-43f5-b623-63c01127f629-kube-api-access-8p2bp\") pod \"collect-profiles-29405865-vqgsp\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 
17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.403137 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p2bp\" (UniqueName: \"kubernetes.io/projected/34a0f56e-c945-43f5-b623-63c01127f629-kube-api-access-8p2bp\") pod \"collect-profiles-29405865-vqgsp\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.403218 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a0f56e-c945-43f5-b623-63c01127f629-secret-volume\") pod \"collect-profiles-29405865-vqgsp\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.403298 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a0f56e-c945-43f5-b623-63c01127f629-config-volume\") pod \"collect-profiles-29405865-vqgsp\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.404364 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a0f56e-c945-43f5-b623-63c01127f629-config-volume\") pod \"collect-profiles-29405865-vqgsp\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.413959 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a0f56e-c945-43f5-b623-63c01127f629-secret-volume\") pod \"collect-profiles-29405865-vqgsp\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.428521 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p2bp\" (UniqueName: \"kubernetes.io/projected/34a0f56e-c945-43f5-b623-63c01127f629-kube-api-access-8p2bp\") pod \"collect-profiles-29405865-vqgsp\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.481261 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:00 crc kubenswrapper[4710]: I1128 17:45:00.975870 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp"] Nov 28 17:45:00 crc kubenswrapper[4710]: W1128 17:45:00.977111 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34a0f56e_c945_43f5_b623_63c01127f629.slice/crio-3f919a61dee505a303312a4d407b19ec60a23974c75a69ba83dc7c2a49a71baf WatchSource:0}: Error finding container 3f919a61dee505a303312a4d407b19ec60a23974c75a69ba83dc7c2a49a71baf: Status 404 returned error can't find the container with id 3f919a61dee505a303312a4d407b19ec60a23974c75a69ba83dc7c2a49a71baf Nov 28 17:45:01 crc kubenswrapper[4710]: I1128 17:45:01.533922 4710 generic.go:334] "Generic (PLEG): container finished" podID="34a0f56e-c945-43f5-b623-63c01127f629" containerID="ccbe9d056bedff3a108adb4341a8bae14b835ea558e32f432cef8d79b048dfc1" exitCode=0 Nov 28 17:45:01 crc kubenswrapper[4710]: I1128 17:45:01.534083 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" event={"ID":"34a0f56e-c945-43f5-b623-63c01127f629","Type":"ContainerDied","Data":"ccbe9d056bedff3a108adb4341a8bae14b835ea558e32f432cef8d79b048dfc1"} Nov 28 17:45:01 crc kubenswrapper[4710]: I1128 17:45:01.534274 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" event={"ID":"34a0f56e-c945-43f5-b623-63c01127f629","Type":"ContainerStarted","Data":"3f919a61dee505a303312a4d407b19ec60a23974c75a69ba83dc7c2a49a71baf"} Nov 28 17:45:02 crc kubenswrapper[4710]: I1128 17:45:02.931370 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.055743 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p2bp\" (UniqueName: \"kubernetes.io/projected/34a0f56e-c945-43f5-b623-63c01127f629-kube-api-access-8p2bp\") pod \"34a0f56e-c945-43f5-b623-63c01127f629\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.056142 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a0f56e-c945-43f5-b623-63c01127f629-config-volume\") pod \"34a0f56e-c945-43f5-b623-63c01127f629\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.056341 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a0f56e-c945-43f5-b623-63c01127f629-secret-volume\") pod \"34a0f56e-c945-43f5-b623-63c01127f629\" (UID: \"34a0f56e-c945-43f5-b623-63c01127f629\") " Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.056746 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a0f56e-c945-43f5-b623-63c01127f629-config-volume" (OuterVolumeSpecName: "config-volume") pod "34a0f56e-c945-43f5-b623-63c01127f629" (UID: "34a0f56e-c945-43f5-b623-63c01127f629"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.057869 4710 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34a0f56e-c945-43f5-b623-63c01127f629-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.063338 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a0f56e-c945-43f5-b623-63c01127f629-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "34a0f56e-c945-43f5-b623-63c01127f629" (UID: "34a0f56e-c945-43f5-b623-63c01127f629"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.069552 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a0f56e-c945-43f5-b623-63c01127f629-kube-api-access-8p2bp" (OuterVolumeSpecName: "kube-api-access-8p2bp") pod "34a0f56e-c945-43f5-b623-63c01127f629" (UID: "34a0f56e-c945-43f5-b623-63c01127f629"). InnerVolumeSpecName "kube-api-access-8p2bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.160087 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p2bp\" (UniqueName: \"kubernetes.io/projected/34a0f56e-c945-43f5-b623-63c01127f629-kube-api-access-8p2bp\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.160122 4710 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/34a0f56e-c945-43f5-b623-63c01127f629-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.554483 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" event={"ID":"34a0f56e-c945-43f5-b623-63c01127f629","Type":"ContainerDied","Data":"3f919a61dee505a303312a4d407b19ec60a23974c75a69ba83dc7c2a49a71baf"} Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.554791 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f919a61dee505a303312a4d407b19ec60a23974c75a69ba83dc7c2a49a71baf" Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.554533 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405865-vqgsp" Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.556011 4710 generic.go:334] "Generic (PLEG): container finished" podID="6713c8fc-ccd2-4956-8102-4d888af17897" containerID="455de038a057db74e1322da6f51f5709ba200621e4893a099d26b8a04eb59639" exitCode=0 Nov 28 17:45:03 crc kubenswrapper[4710]: I1128 17:45:03.556064 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" event={"ID":"6713c8fc-ccd2-4956-8102-4d888af17897","Type":"ContainerDied","Data":"455de038a057db74e1322da6f51f5709ba200621e4893a099d26b8a04eb59639"} Nov 28 17:45:04 crc kubenswrapper[4710]: I1128 17:45:04.047487 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv"] Nov 28 17:45:04 crc kubenswrapper[4710]: I1128 17:45:04.059990 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405820-qwzsv"] Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.009356 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.104289 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-2\") pod \"6713c8fc-ccd2-4956-8102-4d888af17897\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.104390 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-telemetry-combined-ca-bundle\") pod \"6713c8fc-ccd2-4956-8102-4d888af17897\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.104507 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mhr5\" (UniqueName: \"kubernetes.io/projected/6713c8fc-ccd2-4956-8102-4d888af17897-kube-api-access-2mhr5\") pod \"6713c8fc-ccd2-4956-8102-4d888af17897\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.104613 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-0\") pod \"6713c8fc-ccd2-4956-8102-4d888af17897\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.104687 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ssh-key\") pod \"6713c8fc-ccd2-4956-8102-4d888af17897\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.104825 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-1\") pod \"6713c8fc-ccd2-4956-8102-4d888af17897\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " Nov 28 17:45:05 
crc kubenswrapper[4710]: I1128 17:45:05.104959 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-inventory\") pod \"6713c8fc-ccd2-4956-8102-4d888af17897\" (UID: \"6713c8fc-ccd2-4956-8102-4d888af17897\") " Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.117016 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "6713c8fc-ccd2-4956-8102-4d888af17897" (UID: "6713c8fc-ccd2-4956-8102-4d888af17897"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.117043 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6713c8fc-ccd2-4956-8102-4d888af17897-kube-api-access-2mhr5" (OuterVolumeSpecName: "kube-api-access-2mhr5") pod "6713c8fc-ccd2-4956-8102-4d888af17897" (UID: "6713c8fc-ccd2-4956-8102-4d888af17897"). InnerVolumeSpecName "kube-api-access-2mhr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.133993 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "6713c8fc-ccd2-4956-8102-4d888af17897" (UID: "6713c8fc-ccd2-4956-8102-4d888af17897"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.141567 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-inventory" (OuterVolumeSpecName: "inventory") pod "6713c8fc-ccd2-4956-8102-4d888af17897" (UID: "6713c8fc-ccd2-4956-8102-4d888af17897"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.143403 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "6713c8fc-ccd2-4956-8102-4d888af17897" (UID: "6713c8fc-ccd2-4956-8102-4d888af17897"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.143859 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6713c8fc-ccd2-4956-8102-4d888af17897" (UID: "6713c8fc-ccd2-4956-8102-4d888af17897"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.146569 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "6713c8fc-ccd2-4956-8102-4d888af17897" (UID: "6713c8fc-ccd2-4956-8102-4d888af17897"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.156977 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c920bc9-abe9-48c5-8124-f15727832b2e" path="/var/lib/kubelet/pods/9c920bc9-abe9-48c5-8124-f15727832b2e/volumes" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.207466 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.207500 4710 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.207513 4710 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.207527 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mhr5\" (UniqueName: \"kubernetes.io/projected/6713c8fc-ccd2-4956-8102-4d888af17897-kube-api-access-2mhr5\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.207540 4710 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.207551 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.207563 4710 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/6713c8fc-ccd2-4956-8102-4d888af17897-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.580460 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" event={"ID":"6713c8fc-ccd2-4956-8102-4d888af17897","Type":"ContainerDied","Data":"f36d7be9f74014fcdc21e97e57efcd388a5dd9326685a123fc8c0d928eb98df5"} Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.580503 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36d7be9f74014fcdc21e97e57efcd388a5dd9326685a123fc8c0d928eb98df5" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.580562 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.679286 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf"] Nov 28 17:45:05 crc kubenswrapper[4710]: E1128 17:45:05.679918 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6713c8fc-ccd2-4956-8102-4d888af17897" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.679953 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="6713c8fc-ccd2-4956-8102-4d888af17897" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 17:45:05 crc kubenswrapper[4710]: E1128 17:45:05.679972 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a0f56e-c945-43f5-b623-63c01127f629" containerName="collect-profiles" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.679978 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a0f56e-c945-43f5-b623-63c01127f629" containerName="collect-profiles" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.680188 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="6713c8fc-ccd2-4956-8102-4d888af17897" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.680202 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a0f56e-c945-43f5-b623-63c01127f629" containerName="collect-profiles" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.680902 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.682679 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.684738 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.684790 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-ntk4q" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.684799 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.688341 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.740826 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf"] Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.821265 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.821477 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.821683 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.821752 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkq77\" (UniqueName: \"kubernetes.io/projected/834f349e-2478-4abd-b6a1-0d413728889f-kube-api-access-dkq77\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.822015 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.923461 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.923539 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.923568 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkq77\" (UniqueName: \"kubernetes.io/projected/834f349e-2478-4abd-b6a1-0d413728889f-kube-api-access-dkq77\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.923632 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.923712 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.928427 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.929729 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.932006 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.944087 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-ssh-key\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:05 crc kubenswrapper[4710]: I1128 17:45:05.950282 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkq77\" (UniqueName: \"kubernetes.io/projected/834f349e-2478-4abd-b6a1-0d413728889f-kube-api-access-dkq77\") pod \"logging-edpm-deployment-openstack-edpm-ipam-xxssf\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:06 crc kubenswrapper[4710]: I1128 17:45:06.002216 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:06 crc kubenswrapper[4710]: I1128 17:45:06.693056 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf"] Nov 28 17:45:07 crc kubenswrapper[4710]: I1128 17:45:07.606270 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" event={"ID":"834f349e-2478-4abd-b6a1-0d413728889f","Type":"ContainerStarted","Data":"3ef9be7edb8dff337fd0d108a7e4c1ab15620b7c2f27c31eef532783cdc48b62"} Nov 28 17:45:08 crc kubenswrapper[4710]: I1128 17:45:08.628317 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" event={"ID":"834f349e-2478-4abd-b6a1-0d413728889f","Type":"ContainerStarted","Data":"5a321d215b6eedcc6335cd2be2274aae1503fbae5e00a0a87752f8a5a53da6a1"} Nov 28 17:45:08 crc kubenswrapper[4710]: I1128 17:45:08.654601 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" podStartSLOduration=2.746893946 podStartE2EDuration="3.654584264s" podCreationTimestamp="2025-11-28 17:45:05 +0000 UTC" firstStartedPulling="2025-11-28 17:45:06.715218834 +0000 UTC m=+2795.973518879" lastFinishedPulling="2025-11-28 17:45:07.622909152 +0000 UTC m=+2796.881209197" observedRunningTime="2025-11-28 17:45:08.645124924 +0000 UTC m=+2797.903424979" watchObservedRunningTime="2025-11-28 17:45:08.654584264 +0000 UTC m=+2797.912884309" Nov 28 17:45:20 crc kubenswrapper[4710]: I1128 17:45:20.758910 4710 generic.go:334] "Generic (PLEG): container finished" podID="834f349e-2478-4abd-b6a1-0d413728889f" containerID="5a321d215b6eedcc6335cd2be2274aae1503fbae5e00a0a87752f8a5a53da6a1" exitCode=0 Nov 28 17:45:20 crc kubenswrapper[4710]: I1128 17:45:20.758981 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" event={"ID":"834f349e-2478-4abd-b6a1-0d413728889f","Type":"ContainerDied","Data":"5a321d215b6eedcc6335cd2be2274aae1503fbae5e00a0a87752f8a5a53da6a1"} Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.301193 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.445204 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-0\") pod \"834f349e-2478-4abd-b6a1-0d413728889f\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.445248 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-ssh-key\") pod \"834f349e-2478-4abd-b6a1-0d413728889f\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.445369 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkq77\" (UniqueName: \"kubernetes.io/projected/834f349e-2478-4abd-b6a1-0d413728889f-kube-api-access-dkq77\") pod \"834f349e-2478-4abd-b6a1-0d413728889f\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.445433 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-1\") pod \"834f349e-2478-4abd-b6a1-0d413728889f\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.445551 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-inventory\") pod \"834f349e-2478-4abd-b6a1-0d413728889f\" (UID: \"834f349e-2478-4abd-b6a1-0d413728889f\") " Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.458693 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/834f349e-2478-4abd-b6a1-0d413728889f-kube-api-access-dkq77" (OuterVolumeSpecName: "kube-api-access-dkq77") pod "834f349e-2478-4abd-b6a1-0d413728889f" (UID: "834f349e-2478-4abd-b6a1-0d413728889f"). InnerVolumeSpecName "kube-api-access-dkq77". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.495926 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "834f349e-2478-4abd-b6a1-0d413728889f" (UID: "834f349e-2478-4abd-b6a1-0d413728889f"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.505984 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "834f349e-2478-4abd-b6a1-0d413728889f" (UID: "834f349e-2478-4abd-b6a1-0d413728889f"). InnerVolumeSpecName "logging-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.512809 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "834f349e-2478-4abd-b6a1-0d413728889f" (UID: "834f349e-2478-4abd-b6a1-0d413728889f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.515418 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-inventory" (OuterVolumeSpecName: "inventory") pod "834f349e-2478-4abd-b6a1-0d413728889f" (UID: "834f349e-2478-4abd-b6a1-0d413728889f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.554878 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkq77\" (UniqueName: \"kubernetes.io/projected/834f349e-2478-4abd-b6a1-0d413728889f-kube-api-access-dkq77\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.554927 4710 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.554947 4710 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-inventory\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.554963 4710 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.554978 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/834f349e-2478-4abd-b6a1-0d413728889f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.784785 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" event={"ID":"834f349e-2478-4abd-b6a1-0d413728889f","Type":"ContainerDied","Data":"3ef9be7edb8dff337fd0d108a7e4c1ab15620b7c2f27c31eef532783cdc48b62"} Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.785246 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ef9be7edb8dff337fd0d108a7e4c1ab15620b7c2f27c31eef532783cdc48b62" Nov 28 17:45:22 crc kubenswrapper[4710]: I1128 17:45:22.784969 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-xxssf" Nov 28 17:45:43 crc kubenswrapper[4710]: I1128 17:45:43.184652 4710 scope.go:117] "RemoveContainer" containerID="46151858bd429571482abdab7da8861e36883fff6031ee4929027487a96115ed" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.741063 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6m4m9"] Nov 28 17:46:29 crc kubenswrapper[4710]: E1128 17:46:29.741941 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="834f349e-2478-4abd-b6a1-0d413728889f" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.741954 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="834f349e-2478-4abd-b6a1-0d413728889f" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.742198 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="834f349e-2478-4abd-b6a1-0d413728889f" containerName="logging-edpm-deployment-openstack-edpm-ipam" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.743695 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.758357 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6m4m9"] Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.791912 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-utilities\") pod \"community-operators-6m4m9\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.792205 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-catalog-content\") pod \"community-operators-6m4m9\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.792297 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cpjt\" (UniqueName: \"kubernetes.io/projected/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-kube-api-access-5cpjt\") pod \"community-operators-6m4m9\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.894468 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-catalog-content\") pod \"community-operators-6m4m9\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.894542 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cpjt\" (UniqueName: \"kubernetes.io/projected/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-kube-api-access-5cpjt\") pod \"community-operators-6m4m9\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " pod="openshift-marketplace/community-operators-6m4m9" Nov 28 
17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.894606 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-utilities\") pod \"community-operators-6m4m9\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.894953 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-catalog-content\") pod \"community-operators-6m4m9\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.895269 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-utilities\") pod \"community-operators-6m4m9\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:29 crc kubenswrapper[4710]: I1128 17:46:29.914064 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cpjt\" (UniqueName: \"kubernetes.io/projected/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-kube-api-access-5cpjt\") pod \"community-operators-6m4m9\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:30 crc kubenswrapper[4710]: I1128 17:46:30.063336 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:30 crc kubenswrapper[4710]: I1128 17:46:30.417798 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6m4m9"] Nov 28 17:46:30 crc kubenswrapper[4710]: I1128 17:46:30.655169 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m4m9" event={"ID":"ffd9155f-d14a-4801-8fef-f5bcc758ab9e","Type":"ContainerStarted","Data":"341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd"} Nov 28 17:46:30 crc kubenswrapper[4710]: I1128 17:46:30.655217 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m4m9" event={"ID":"ffd9155f-d14a-4801-8fef-f5bcc758ab9e","Type":"ContainerStarted","Data":"824dec3d0e692df2aeb36d70a68faebe2aae014a0dc25e841bb19f48eea8e46f"} Nov 28 17:46:31 crc kubenswrapper[4710]: I1128 17:46:31.668359 4710 generic.go:334] "Generic (PLEG): container finished" podID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerID="341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd" exitCode=0 Nov 28 17:46:31 crc kubenswrapper[4710]: I1128 17:46:31.668487 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m4m9" event={"ID":"ffd9155f-d14a-4801-8fef-f5bcc758ab9e","Type":"ContainerDied","Data":"341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd"} Nov 28 17:46:31 crc kubenswrapper[4710]: I1128 17:46:31.671230 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.116533 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4hslq"] Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.119550 4710 util.go:30] "No 
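The community-operators-6m4m9 lifecycle above (two emptyDir mounts, an extract container exiting 0, then another container starting) matches the usual marketplace catalog pod shape: extract-utilities and extract-content init steps feeding a registry-server container, the names the cpu_manager reports later in this log when clearing stale state. A sketch of that shape with placeholder images; everything not named in the log is an assumption:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        emptyDir := func(name string) corev1.Volume {
            return corev1.Volume{
                Name:         name,
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }
        }
        pod := corev1.Pod{
            Spec: corev1.PodSpec{
                // The two emptyDir volumes mounted above.
                Volumes: []corev1.Volume{emptyDir("utilities"), emptyDir("catalog-content")},
                InitContainers: []corev1.Container{
                    {Name: "extract-utilities", Image: "example/opm:placeholder"},
                    {Name: "extract-content", Image: "example/catalog:placeholder"},
                },
                Containers: []corev1.Container{
                    {Name: "registry-server", Image: "example/catalog:placeholder"},
                },
            },
        }
        fmt.Println(len(pod.Spec.InitContainers), "init containers,", len(pod.Spec.Containers), "server container")
    }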
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.178857 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hslq"]
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.264821 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-catalog-content\") pod \"redhat-operators-4hslq\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.264944 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-utilities\") pod \"redhat-operators-4hslq\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.265967 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcwl\" (UniqueName: \"kubernetes.io/projected/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-kube-api-access-dpcwl\") pod \"redhat-operators-4hslq\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.367620 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpcwl\" (UniqueName: \"kubernetes.io/projected/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-kube-api-access-dpcwl\") pod \"redhat-operators-4hslq\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.367781 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-catalog-content\") pod \"redhat-operators-4hslq\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.367808 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-utilities\") pod \"redhat-operators-4hslq\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.368287 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-utilities\") pod \"redhat-operators-4hslq\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.368774 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-catalog-content\") pod \"redhat-operators-4hslq\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.400049 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpcwl\" (UniqueName: \"kubernetes.io/projected/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-kube-api-access-dpcwl\") pod \"redhat-operators-4hslq\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.445151 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:32 crc kubenswrapper[4710]: I1128 17:46:32.912041 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hslq"]
Nov 28 17:46:32 crc kubenswrapper[4710]: W1128 17:46:32.918281 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63b14b1f_ae3d_442d_a4e2_c9fa1037d848.slice/crio-93d0706b83f30ca0579f2147e1335d77d4bf73c1996c6b1c793c86bfb9a9c14f WatchSource:0}: Error finding container 93d0706b83f30ca0579f2147e1335d77d4bf73c1996c6b1c793c86bfb9a9c14f: Status 404 returned error can't find the container with id 93d0706b83f30ca0579f2147e1335d77d4bf73c1996c6b1c793c86bfb9a9c14f
Nov 28 17:46:33 crc kubenswrapper[4710]: I1128 17:46:33.697617 4710 generic.go:334] "Generic (PLEG): container finished" podID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerID="5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929" exitCode=0
Nov 28 17:46:33 crc kubenswrapper[4710]: I1128 17:46:33.697935 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m4m9" event={"ID":"ffd9155f-d14a-4801-8fef-f5bcc758ab9e","Type":"ContainerDied","Data":"5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929"}
Nov 28 17:46:33 crc kubenswrapper[4710]: I1128 17:46:33.702140 4710 generic.go:334] "Generic (PLEG): container finished" podID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerID="b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42" exitCode=0
Nov 28 17:46:33 crc kubenswrapper[4710]: I1128 17:46:33.702184 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hslq" event={"ID":"63b14b1f-ae3d-442d-a4e2-c9fa1037d848","Type":"ContainerDied","Data":"b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42"}
Nov 28 17:46:33 crc kubenswrapper[4710]: I1128 17:46:33.702211 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hslq" event={"ID":"63b14b1f-ae3d-442d-a4e2-c9fa1037d848","Type":"ContainerStarted","Data":"93d0706b83f30ca0579f2147e1335d77d4bf73c1996c6b1c793c86bfb9a9c14f"}
Nov 28 17:46:34 crc kubenswrapper[4710]: I1128 17:46:34.714452 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hslq" event={"ID":"63b14b1f-ae3d-442d-a4e2-c9fa1037d848","Type":"ContainerStarted","Data":"5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a"}
Nov 28 17:46:34 crc kubenswrapper[4710]: I1128 17:46:34.719685 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m4m9" event={"ID":"ffd9155f-d14a-4801-8fef-f5bcc758ab9e","Type":"ContainerStarted","Data":"518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95"}
Nov 28 17:46:34 crc kubenswrapper[4710]: I1128 17:46:34.766121 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6m4m9" podStartSLOduration=3.178807066 podStartE2EDuration="5.766104453s" podCreationTimestamp="2025-11-28 17:46:29 +0000 UTC" firstStartedPulling="2025-11-28 17:46:31.670732856 +0000 UTC m=+2880.929032951" lastFinishedPulling="2025-11-28 17:46:34.258030253 +0000 UTC m=+2883.516330338" observedRunningTime="2025-11-28 17:46:34.765634207 +0000 UTC m=+2884.023934272" watchObservedRunningTime="2025-11-28 17:46:34.766104453 +0000 UTC m=+2884.024404498"
Nov 28 17:46:37 crc kubenswrapper[4710]: I1128 17:46:37.753911 4710 generic.go:334] "Generic (PLEG): container finished" podID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerID="5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a" exitCode=0
Nov 28 17:46:37 crc kubenswrapper[4710]: I1128 17:46:37.753979 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hslq" event={"ID":"63b14b1f-ae3d-442d-a4e2-c9fa1037d848","Type":"ContainerDied","Data":"5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a"}
Nov 28 17:46:39 crc kubenswrapper[4710]: I1128 17:46:39.778140 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hslq" event={"ID":"63b14b1f-ae3d-442d-a4e2-c9fa1037d848","Type":"ContainerStarted","Data":"4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8"}
Nov 28 17:46:39 crc kubenswrapper[4710]: I1128 17:46:39.808655 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4hslq" podStartSLOduration=2.790529416 podStartE2EDuration="7.808577043s" podCreationTimestamp="2025-11-28 17:46:32 +0000 UTC" firstStartedPulling="2025-11-28 17:46:33.704101468 +0000 UTC m=+2882.962401513" lastFinishedPulling="2025-11-28 17:46:38.722149075 +0000 UTC m=+2887.980449140" observedRunningTime="2025-11-28 17:46:39.799404842 +0000 UTC m=+2889.057704917" watchObservedRunningTime="2025-11-28 17:46:39.808577043 +0000 UTC m=+2889.066877118"
Nov 28 17:46:40 crc kubenswrapper[4710]: I1128 17:46:40.063672 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6m4m9"
Nov 28 17:46:40 crc kubenswrapper[4710]: I1128 17:46:40.063748 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6m4m9"
Nov 28 17:46:40 crc kubenswrapper[4710]: I1128 17:46:40.116726 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6m4m9"
Nov 28 17:46:40 crc kubenswrapper[4710]: I1128 17:46:40.836714 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6m4m9"
Nov 28 17:46:41 crc kubenswrapper[4710]: I1128 17:46:41.533060 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6m4m9"]
Nov 28 17:46:42 crc kubenswrapper[4710]: I1128 17:46:42.446217 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:42 crc kubenswrapper[4710]: I1128 17:46:42.446699 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4hslq"
Nov 28 17:46:42 crc kubenswrapper[4710]: I1128 17:46:42.813329 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6m4m9" podUID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerName="registry-server" containerID="cri-o://518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95" gracePeriod=2
containerID="cri-o://518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95" gracePeriod=2 Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.269871 4710 scope.go:117] "RemoveContainer" containerID="860a2c50f1e92c8aa4b526dd2ec8a920b5b03b2693be55537bfbc37cd3e71a21" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.282159 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.296550 4710 scope.go:117] "RemoveContainer" containerID="e68c4f48e3f3b27fe1d4744401a47d0d8cae897e5188e75c1a59007d084fe674" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.340434 4710 scope.go:117] "RemoveContainer" containerID="9d6c7d656525d2bb0bf29db35409845a0bba69ae8b2f2a82c0d814c35da498f0" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.343959 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.344034 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.434479 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cpjt\" (UniqueName: \"kubernetes.io/projected/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-kube-api-access-5cpjt\") pod \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.434614 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-utilities\") pod \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.434688 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-catalog-content\") pod \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\" (UID: \"ffd9155f-d14a-4801-8fef-f5bcc758ab9e\") " Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.436613 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-utilities" (OuterVolumeSpecName: "utilities") pod "ffd9155f-d14a-4801-8fef-f5bcc758ab9e" (UID: "ffd9155f-d14a-4801-8fef-f5bcc758ab9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.440384 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-kube-api-access-5cpjt" (OuterVolumeSpecName: "kube-api-access-5cpjt") pod "ffd9155f-d14a-4801-8fef-f5bcc758ab9e" (UID: "ffd9155f-d14a-4801-8fef-f5bcc758ab9e"). InnerVolumeSpecName "kube-api-access-5cpjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.481455 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffd9155f-d14a-4801-8fef-f5bcc758ab9e" (UID: "ffd9155f-d14a-4801-8fef-f5bcc758ab9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.514491 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hslq" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerName="registry-server" probeResult="failure" output=< Nov 28 17:46:43 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s Nov 28 17:46:43 crc kubenswrapper[4710]: > Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.541332 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.541376 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.541392 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cpjt\" (UniqueName: \"kubernetes.io/projected/ffd9155f-d14a-4801-8fef-f5bcc758ab9e-kube-api-access-5cpjt\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.830265 4710 generic.go:334] "Generic (PLEG): container finished" podID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerID="518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95" exitCode=0 Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.830349 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m4m9" event={"ID":"ffd9155f-d14a-4801-8fef-f5bcc758ab9e","Type":"ContainerDied","Data":"518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95"} Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.830424 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6m4m9" event={"ID":"ffd9155f-d14a-4801-8fef-f5bcc758ab9e","Type":"ContainerDied","Data":"824dec3d0e692df2aeb36d70a68faebe2aae014a0dc25e841bb19f48eea8e46f"} Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.830478 4710 scope.go:117] "RemoveContainer" containerID="518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.830356 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6m4m9" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.871528 4710 scope.go:117] "RemoveContainer" containerID="5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.903891 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6m4m9"] Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.914740 4710 scope.go:117] "RemoveContainer" containerID="341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd" Nov 28 17:46:43 crc kubenswrapper[4710]: I1128 17:46:43.916536 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6m4m9"] Nov 28 17:46:44 crc kubenswrapper[4710]: I1128 17:46:43.999972 4710 scope.go:117] "RemoveContainer" containerID="518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95" Nov 28 17:46:44 crc kubenswrapper[4710]: E1128 17:46:44.000541 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95\": container with ID starting with 518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95 not found: ID does not exist" containerID="518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95" Nov 28 17:46:44 crc kubenswrapper[4710]: I1128 17:46:44.000587 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95"} err="failed to get container status \"518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95\": rpc error: code = NotFound desc = could not find container \"518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95\": container with ID starting with 518743779b34331c6fa8ddde11e21b30224b62675e2f572af46de32458993b95 not found: ID does not exist" Nov 28 17:46:44 crc kubenswrapper[4710]: I1128 17:46:44.000617 4710 scope.go:117] "RemoveContainer" containerID="5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929" Nov 28 17:46:44 crc kubenswrapper[4710]: E1128 17:46:44.000852 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929\": container with ID starting with 5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929 not found: ID does not exist" containerID="5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929" Nov 28 17:46:44 crc kubenswrapper[4710]: I1128 17:46:44.000875 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929"} err="failed to get container status \"5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929\": rpc error: code = NotFound desc = could not find container \"5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929\": container with ID starting with 5b13506a7ebb1c3282a698e9eb20f3d13582a01eb23c25cbd566e7df854bb929 not found: ID does not exist" Nov 28 17:46:44 crc kubenswrapper[4710]: I1128 17:46:44.000893 4710 scope.go:117] "RemoveContainer" containerID="341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd" Nov 28 17:46:44 crc kubenswrapper[4710]: E1128 17:46:44.001332 4710 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd\": container with ID starting with 341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd not found: ID does not exist" containerID="341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd" Nov 28 17:46:44 crc kubenswrapper[4710]: I1128 17:46:44.001363 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd"} err="failed to get container status \"341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd\": rpc error: code = NotFound desc = could not find container \"341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd\": container with ID starting with 341ae1237b72be5fb934c19d21aa4b49242ea2de667309714d081525b1dae2bd not found: ID does not exist" Nov 28 17:46:45 crc kubenswrapper[4710]: I1128 17:46:45.164707 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" path="/var/lib/kubelet/pods/ffd9155f-d14a-4801-8fef-f5bcc758ab9e/volumes" Nov 28 17:46:52 crc kubenswrapper[4710]: I1128 17:46:52.512725 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4hslq" Nov 28 17:46:52 crc kubenswrapper[4710]: I1128 17:46:52.573541 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4hslq" Nov 28 17:46:52 crc kubenswrapper[4710]: I1128 17:46:52.752974 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hslq"] Nov 28 17:46:53 crc kubenswrapper[4710]: I1128 17:46:53.943650 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4hslq" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerName="registry-server" containerID="cri-o://4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8" gracePeriod=2 Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.455230 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hslq" Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.620132 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-utilities\") pod \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.620556 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpcwl\" (UniqueName: \"kubernetes.io/projected/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-kube-api-access-dpcwl\") pod \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.620789 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-catalog-content\") pod \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\" (UID: \"63b14b1f-ae3d-442d-a4e2-c9fa1037d848\") " Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.621337 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-utilities" (OuterVolumeSpecName: "utilities") pod "63b14b1f-ae3d-442d-a4e2-c9fa1037d848" (UID: "63b14b1f-ae3d-442d-a4e2-c9fa1037d848"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.627076 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-kube-api-access-dpcwl" (OuterVolumeSpecName: "kube-api-access-dpcwl") pod "63b14b1f-ae3d-442d-a4e2-c9fa1037d848" (UID: "63b14b1f-ae3d-442d-a4e2-c9fa1037d848"). InnerVolumeSpecName "kube-api-access-dpcwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.730013 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.730060 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpcwl\" (UniqueName: \"kubernetes.io/projected/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-kube-api-access-dpcwl\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.732639 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63b14b1f-ae3d-442d-a4e2-c9fa1037d848" (UID: "63b14b1f-ae3d-442d-a4e2-c9fa1037d848"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.830792 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b14b1f-ae3d-442d-a4e2-c9fa1037d848-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.960404 4710 generic.go:334] "Generic (PLEG): container finished" podID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerID="4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8" exitCode=0 Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.960504 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hslq" event={"ID":"63b14b1f-ae3d-442d-a4e2-c9fa1037d848","Type":"ContainerDied","Data":"4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8"} Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.960518 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hslq" Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.961259 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hslq" event={"ID":"63b14b1f-ae3d-442d-a4e2-c9fa1037d848","Type":"ContainerDied","Data":"93d0706b83f30ca0579f2147e1335d77d4bf73c1996c6b1c793c86bfb9a9c14f"} Nov 28 17:46:54 crc kubenswrapper[4710]: I1128 17:46:54.961275 4710 scope.go:117] "RemoveContainer" containerID="4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8" Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.028047 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hslq"] Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.037834 4710 scope.go:117] "RemoveContainer" containerID="5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a" Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.041937 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4hslq"] Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.067397 4710 scope.go:117] "RemoveContainer" containerID="b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42" Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.131688 4710 scope.go:117] "RemoveContainer" containerID="4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8" Nov 28 17:46:55 crc kubenswrapper[4710]: E1128 17:46:55.134554 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8\": container with ID starting with 4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8 not found: ID does not exist" containerID="4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8" Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.134726 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8"} err="failed to get container status \"4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8\": rpc error: code = NotFound desc = could not find container \"4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8\": container with ID starting with 4f44ef7d3cfd8b953930584ae6ffc398656246f54ee4abef913cef22350941b8 not found: ID does not exist" Nov 28 17:46:55 crc 
kubenswrapper[4710]: I1128 17:46:55.134886 4710 scope.go:117] "RemoveContainer" containerID="5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a" Nov 28 17:46:55 crc kubenswrapper[4710]: E1128 17:46:55.135882 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a\": container with ID starting with 5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a not found: ID does not exist" containerID="5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a" Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.135916 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a"} err="failed to get container status \"5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a\": rpc error: code = NotFound desc = could not find container \"5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a\": container with ID starting with 5c69c11a9aab0956c0c64f276a78076898309445f34d3c91884aaddcf1543e2a not found: ID does not exist" Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.135937 4710 scope.go:117] "RemoveContainer" containerID="b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42" Nov 28 17:46:55 crc kubenswrapper[4710]: E1128 17:46:55.136295 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42\": container with ID starting with b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42 not found: ID does not exist" containerID="b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42" Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.136319 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42"} err="failed to get container status \"b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42\": rpc error: code = NotFound desc = could not find container \"b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42\": container with ID starting with b1784f471d5d0044ef2c2a949e47725ba55e9b1fc3a11d2308f8adbc1ce1cd42 not found: ID does not exist" Nov 28 17:46:55 crc kubenswrapper[4710]: I1128 17:46:55.158405 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" path="/var/lib/kubelet/pods/63b14b1f-ae3d-442d-a4e2-c9fa1037d848/volumes" Nov 28 17:47:13 crc kubenswrapper[4710]: I1128 17:47:13.344186 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:47:13 crc kubenswrapper[4710]: I1128 17:47:13.344696 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:47:43 crc kubenswrapper[4710]: I1128 17:47:43.343571 4710 patch_prober.go:28] interesting 
pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:47:43 crc kubenswrapper[4710]: I1128 17:47:43.344216 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:47:43 crc kubenswrapper[4710]: I1128 17:47:43.344272 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:47:43 crc kubenswrapper[4710]: I1128 17:47:43.345479 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:47:43 crc kubenswrapper[4710]: I1128 17:47:43.345541 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" gracePeriod=600 Nov 28 17:47:44 crc kubenswrapper[4710]: E1128 17:47:44.261515 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:47:44 crc kubenswrapper[4710]: I1128 17:47:44.579084 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" exitCode=0 Nov 28 17:47:44 crc kubenswrapper[4710]: I1128 17:47:44.579172 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe"} Nov 28 17:47:44 crc kubenswrapper[4710]: I1128 17:47:44.579453 4710 scope.go:117] "RemoveContainer" containerID="eee63a2fe472ec7898194cd95ff06f894330d78d08bb63d109cdb16983d45005" Nov 28 17:47:44 crc kubenswrapper[4710]: I1128 17:47:44.580813 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:47:44 crc kubenswrapper[4710]: E1128 17:47:44.581403 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" 
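The machine-config-daemon failures above are an HTTP liveness check against 127.0.0.1:8798/health failing with connection refused every 30 seconds, until the kubelet kills the container (gracePeriod=600) and restarts it under exponential back-off, already at its 5m0s cap here. A sketch of such a probe; the host, port, path, and 30s period are taken from the log, the failure threshold is an assumption:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        liveness := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Host: "127.0.0.1",
                    Path: "/health",
                    Port: intstr.FromInt(8798),
                },
            },
            PeriodSeconds:    30, // the probe failures above are 30s apart
            FailureThreshold: 3,  // assumption
        }
        fmt.Println("GET", liveness.HTTPGet.Host, liveness.HTTPGet.Path, liveness.HTTPGet.Port.IntValue())
    }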
pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.352743 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 17:47:55 crc kubenswrapper[4710]: E1128 17:47:55.354178 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerName="extract-utilities" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.354206 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerName="extract-utilities" Nov 28 17:47:55 crc kubenswrapper[4710]: E1128 17:47:55.354242 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerName="extract-content" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.354255 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerName="extract-content" Nov 28 17:47:55 crc kubenswrapper[4710]: E1128 17:47:55.354290 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerName="registry-server" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.354305 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerName="registry-server" Nov 28 17:47:55 crc kubenswrapper[4710]: E1128 17:47:55.354334 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerName="extract-content" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.354347 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerName="extract-content" Nov 28 17:47:55 crc kubenswrapper[4710]: E1128 17:47:55.354386 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerName="extract-utilities" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.354398 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerName="extract-utilities" Nov 28 17:47:55 crc kubenswrapper[4710]: E1128 17:47:55.354439 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerName="registry-server" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.354451 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerName="registry-server" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.354903 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="63b14b1f-ae3d-442d-a4e2-c9fa1037d848" containerName="registry-server" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.354932 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffd9155f-d14a-4801-8fef-f5bcc758ab9e" containerName="registry-server" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.356291 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.359828 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hbxmh" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.360442 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.360859 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.361087 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.371110 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.506965 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.507338 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.507715 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-config-data\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.507930 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.508209 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfn6z\" (UniqueName: \"kubernetes.io/projected/e6e3da65-b095-4e28-9fab-5a481096c743-kube-api-access-dfn6z\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.508271 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.508337 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.508745 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.509072 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.610776 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.610877 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-config-data\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.610916 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.610980 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfn6z\" (UniqueName: \"kubernetes.io/projected/e6e3da65-b095-4e28-9fab-5a481096c743-kube-api-access-dfn6z\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.611008 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.611036 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.611090 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" 
(UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.611152 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.611187 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.611586 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.611900 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.611940 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.612453 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-config-data\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.612500 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.619679 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.621205 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: 
I1128 17:47:55.623489 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.634479 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfn6z\" (UniqueName: \"kubernetes.io/projected/e6e3da65-b095-4e28-9fab-5a481096c743-kube-api-access-dfn6z\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.657489 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " pod="openstack/tempest-tests-tempest" Nov 28 17:47:55 crc kubenswrapper[4710]: I1128 17:47:55.689371 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 17:47:56 crc kubenswrapper[4710]: W1128 17:47:56.192853 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6e3da65_b095_4e28_9fab_5a481096c743.slice/crio-bff35e5dc2c53a3b4a2e2d4062b699e93588db006cb5f82677097133026e52c5 WatchSource:0}: Error finding container bff35e5dc2c53a3b4a2e2d4062b699e93588db006cb5f82677097133026e52c5: Status 404 returned error can't find the container with id bff35e5dc2c53a3b4a2e2d4062b699e93588db006cb5f82677097133026e52c5 Nov 28 17:47:56 crc kubenswrapper[4710]: I1128 17:47:56.196193 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 28 17:47:56 crc kubenswrapper[4710]: I1128 17:47:56.736878 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e6e3da65-b095-4e28-9fab-5a481096c743","Type":"ContainerStarted","Data":"bff35e5dc2c53a3b4a2e2d4062b699e93588db006cb5f82677097133026e52c5"} Nov 28 17:47:59 crc kubenswrapper[4710]: I1128 17:47:59.141806 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:47:59 crc kubenswrapper[4710]: E1128 17:47:59.142339 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:48:10 crc kubenswrapper[4710]: I1128 17:48:10.141129 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:48:10 crc kubenswrapper[4710]: E1128 17:48:10.141964 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" 
podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:48:24 crc kubenswrapper[4710]: I1128 17:48:24.143862 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:48:24 crc kubenswrapper[4710]: E1128 17:48:24.145540 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:48:30 crc kubenswrapper[4710]: E1128 17:48:30.334900 4710 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 28 17:48:30 crc kubenswrapper[4710]: E1128 17:48:30.335565 4710 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dfn6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProf
ile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(e6e3da65-b095-4e28-9fab-5a481096c743): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 28 17:48:30 crc kubenswrapper[4710]: E1128 17:48:30.336886 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="e6e3da65-b095-4e28-9fab-5a481096c743" Nov 28 17:48:31 crc kubenswrapper[4710]: E1128 17:48:31.321693 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="e6e3da65-b095-4e28-9fab-5a481096c743" Nov 28 17:48:36 crc kubenswrapper[4710]: I1128 17:48:36.141880 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:48:36 crc kubenswrapper[4710]: E1128 17:48:36.143794 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:48:43 crc kubenswrapper[4710]: I1128 17:48:43.874132 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 28 17:48:45 crc kubenswrapper[4710]: I1128 17:48:45.502340 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e6e3da65-b095-4e28-9fab-5a481096c743","Type":"ContainerStarted","Data":"2982b6f4e59e5fc1a074eef7a4a25b576e925cdca293c02f5b2280b9af20aba1"} Nov 28 17:48:45 crc kubenswrapper[4710]: I1128 17:48:45.530003 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.855811959 podStartE2EDuration="51.529976041s" podCreationTimestamp="2025-11-28 17:47:54 +0000 UTC" firstStartedPulling="2025-11-28 17:47:56.196175054 +0000 UTC m=+2965.454475099" lastFinishedPulling="2025-11-28 17:48:43.870339136 +0000 UTC m=+3013.128639181" observedRunningTime="2025-11-28 17:48:45.527984418 +0000 UTC m=+3014.786284483" watchObservedRunningTime="2025-11-28 17:48:45.529976041 +0000 UTC m=+3014.788276086" Nov 28 17:48:50 crc kubenswrapper[4710]: I1128 17:48:50.142222 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:48:50 crc kubenswrapper[4710]: E1128 17:48:50.143661 
4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:49:04 crc kubenswrapper[4710]: I1128 17:49:04.143605 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:49:04 crc kubenswrapper[4710]: E1128 17:49:04.144729 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:49:17 crc kubenswrapper[4710]: I1128 17:49:17.142879 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:49:17 crc kubenswrapper[4710]: E1128 17:49:17.143722 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:49:32 crc kubenswrapper[4710]: I1128 17:49:32.142462 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:49:32 crc kubenswrapper[4710]: E1128 17:49:32.144726 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:49:46 crc kubenswrapper[4710]: I1128 17:49:46.142191 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:49:46 crc kubenswrapper[4710]: E1128 17:49:46.143937 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:49:59 crc kubenswrapper[4710]: I1128 17:49:59.141722 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:49:59 crc kubenswrapper[4710]: E1128 17:49:59.142697 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:50:11 crc kubenswrapper[4710]: I1128 17:50:11.148973 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:50:11 crc kubenswrapper[4710]: E1128 17:50:11.149738 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:50:23 crc kubenswrapper[4710]: I1128 17:50:23.141992 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:50:23 crc kubenswrapper[4710]: E1128 17:50:23.142977 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:50:34 crc kubenswrapper[4710]: I1128 17:50:34.141781 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:50:34 crc kubenswrapper[4710]: E1128 17:50:34.142633 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:50:46 crc kubenswrapper[4710]: I1128 17:50:46.142406 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:50:46 crc kubenswrapper[4710]: E1128 17:50:46.143057 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:50:58 crc kubenswrapper[4710]: I1128 17:50:58.836852 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r62xh"] Nov 28 17:50:58 crc kubenswrapper[4710]: I1128 17:50:58.840140 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:58 crc kubenswrapper[4710]: I1128 17:50:58.858638 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r62xh"] Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.000528 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-catalog-content\") pod \"redhat-marketplace-r62xh\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.000708 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-utilities\") pod \"redhat-marketplace-r62xh\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.000880 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96dlq\" (UniqueName: \"kubernetes.io/projected/1b9214d6-382e-4900-a183-15eddecc201a-kube-api-access-96dlq\") pod \"redhat-marketplace-r62xh\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.103533 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-catalog-content\") pod \"redhat-marketplace-r62xh\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.103683 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-utilities\") pod \"redhat-marketplace-r62xh\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.103849 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96dlq\" (UniqueName: \"kubernetes.io/projected/1b9214d6-382e-4900-a183-15eddecc201a-kube-api-access-96dlq\") pod \"redhat-marketplace-r62xh\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.104093 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-catalog-content\") pod \"redhat-marketplace-r62xh\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.104093 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-utilities\") pod \"redhat-marketplace-r62xh\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.135841 4710 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-96dlq\" (UniqueName: \"kubernetes.io/projected/1b9214d6-382e-4900-a183-15eddecc201a-kube-api-access-96dlq\") pod \"redhat-marketplace-r62xh\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.142408 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:50:59 crc kubenswrapper[4710]: E1128 17:50:59.147836 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.166401 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:50:59 crc kubenswrapper[4710]: I1128 17:50:59.663048 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r62xh"] Nov 28 17:51:00 crc kubenswrapper[4710]: I1128 17:51:00.451141 4710 generic.go:334] "Generic (PLEG): container finished" podID="1b9214d6-382e-4900-a183-15eddecc201a" containerID="86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd" exitCode=0 Nov 28 17:51:00 crc kubenswrapper[4710]: I1128 17:51:00.451247 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r62xh" event={"ID":"1b9214d6-382e-4900-a183-15eddecc201a","Type":"ContainerDied","Data":"86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd"} Nov 28 17:51:00 crc kubenswrapper[4710]: I1128 17:51:00.451649 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r62xh" event={"ID":"1b9214d6-382e-4900-a183-15eddecc201a","Type":"ContainerStarted","Data":"e4470c82872190eb83a9635eb9e3af65cef05262cf8579304e3808e2b1f117f2"} Nov 28 17:51:01 crc kubenswrapper[4710]: I1128 17:51:01.466674 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r62xh" event={"ID":"1b9214d6-382e-4900-a183-15eddecc201a","Type":"ContainerStarted","Data":"4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3"} Nov 28 17:51:02 crc kubenswrapper[4710]: I1128 17:51:02.480148 4710 generic.go:334] "Generic (PLEG): container finished" podID="1b9214d6-382e-4900-a183-15eddecc201a" containerID="4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3" exitCode=0 Nov 28 17:51:02 crc kubenswrapper[4710]: I1128 17:51:02.480270 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r62xh" event={"ID":"1b9214d6-382e-4900-a183-15eddecc201a","Type":"ContainerDied","Data":"4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3"} Nov 28 17:51:03 crc kubenswrapper[4710]: I1128 17:51:03.513338 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r62xh" event={"ID":"1b9214d6-382e-4900-a183-15eddecc201a","Type":"ContainerStarted","Data":"67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb"} Nov 28 17:51:03 crc kubenswrapper[4710]: I1128 17:51:03.548828 4710 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-r62xh" podStartSLOduration=3.09430472 podStartE2EDuration="5.548805553s" podCreationTimestamp="2025-11-28 17:50:58 +0000 UTC" firstStartedPulling="2025-11-28 17:51:00.453733287 +0000 UTC m=+3149.712033332" lastFinishedPulling="2025-11-28 17:51:02.90823412 +0000 UTC m=+3152.166534165" observedRunningTime="2025-11-28 17:51:03.533951712 +0000 UTC m=+3152.792251757" watchObservedRunningTime="2025-11-28 17:51:03.548805553 +0000 UTC m=+3152.807105598" Nov 28 17:51:09 crc kubenswrapper[4710]: I1128 17:51:09.167412 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:51:09 crc kubenswrapper[4710]: I1128 17:51:09.168222 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:51:09 crc kubenswrapper[4710]: I1128 17:51:09.252432 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:51:09 crc kubenswrapper[4710]: I1128 17:51:09.639880 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:51:09 crc kubenswrapper[4710]: I1128 17:51:09.704675 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r62xh"] Nov 28 17:51:11 crc kubenswrapper[4710]: I1128 17:51:11.597743 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r62xh" podUID="1b9214d6-382e-4900-a183-15eddecc201a" containerName="registry-server" containerID="cri-o://67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb" gracePeriod=2 Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.142437 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:51:12 crc kubenswrapper[4710]: E1128 17:51:12.143070 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.351572 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.418393 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96dlq\" (UniqueName: \"kubernetes.io/projected/1b9214d6-382e-4900-a183-15eddecc201a-kube-api-access-96dlq\") pod \"1b9214d6-382e-4900-a183-15eddecc201a\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.418521 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-catalog-content\") pod \"1b9214d6-382e-4900-a183-15eddecc201a\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.418685 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-utilities\") pod \"1b9214d6-382e-4900-a183-15eddecc201a\" (UID: \"1b9214d6-382e-4900-a183-15eddecc201a\") " Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.419363 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-utilities" (OuterVolumeSpecName: "utilities") pod "1b9214d6-382e-4900-a183-15eddecc201a" (UID: "1b9214d6-382e-4900-a183-15eddecc201a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.424321 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b9214d6-382e-4900-a183-15eddecc201a-kube-api-access-96dlq" (OuterVolumeSpecName: "kube-api-access-96dlq") pod "1b9214d6-382e-4900-a183-15eddecc201a" (UID: "1b9214d6-382e-4900-a183-15eddecc201a"). InnerVolumeSpecName "kube-api-access-96dlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.436381 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b9214d6-382e-4900-a183-15eddecc201a" (UID: "1b9214d6-382e-4900-a183-15eddecc201a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.521855 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.521942 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b9214d6-382e-4900-a183-15eddecc201a-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.521980 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96dlq\" (UniqueName: \"kubernetes.io/projected/1b9214d6-382e-4900-a183-15eddecc201a-kube-api-access-96dlq\") on node \"crc\" DevicePath \"\"" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.611642 4710 generic.go:334] "Generic (PLEG): container finished" podID="1b9214d6-382e-4900-a183-15eddecc201a" containerID="67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb" exitCode=0 Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.611689 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r62xh" event={"ID":"1b9214d6-382e-4900-a183-15eddecc201a","Type":"ContainerDied","Data":"67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb"} Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.611736 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r62xh" event={"ID":"1b9214d6-382e-4900-a183-15eddecc201a","Type":"ContainerDied","Data":"e4470c82872190eb83a9635eb9e3af65cef05262cf8579304e3808e2b1f117f2"} Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.611773 4710 scope.go:117] "RemoveContainer" containerID="67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.611824 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r62xh" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.641827 4710 scope.go:117] "RemoveContainer" containerID="4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.667042 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r62xh"] Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.676575 4710 scope.go:117] "RemoveContainer" containerID="86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.710735 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r62xh"] Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.745390 4710 scope.go:117] "RemoveContainer" containerID="67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb" Nov 28 17:51:12 crc kubenswrapper[4710]: E1128 17:51:12.746664 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb\": container with ID starting with 67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb not found: ID does not exist" containerID="67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.746700 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb"} err="failed to get container status \"67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb\": rpc error: code = NotFound desc = could not find container \"67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb\": container with ID starting with 67a1592f187594c61b2ee1a02ca3bf73ff53cc5a0b1e5aaec46c845d03c2b7fb not found: ID does not exist" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.746727 4710 scope.go:117] "RemoveContainer" containerID="4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3" Nov 28 17:51:12 crc kubenswrapper[4710]: E1128 17:51:12.747273 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3\": container with ID starting with 4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3 not found: ID does not exist" containerID="4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.747327 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3"} err="failed to get container status \"4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3\": rpc error: code = NotFound desc = could not find container \"4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3\": container with ID starting with 4e6022894eebb6cce76d88eb3882517a89209aa1b965728244c576688d08b5b3 not found: ID does not exist" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.747355 4710 scope.go:117] "RemoveContainer" containerID="86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd" Nov 28 17:51:12 crc kubenswrapper[4710]: E1128 17:51:12.747741 4710 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd\": container with ID starting with 86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd not found: ID does not exist" containerID="86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd" Nov 28 17:51:12 crc kubenswrapper[4710]: I1128 17:51:12.747781 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd"} err="failed to get container status \"86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd\": rpc error: code = NotFound desc = could not find container \"86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd\": container with ID starting with 86163c40e64d94dd75d47e24b360a3c333c105eb78e81429e90376124b86f0bd not found: ID does not exist" Nov 28 17:51:13 crc kubenswrapper[4710]: I1128 17:51:13.156263 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b9214d6-382e-4900-a183-15eddecc201a" path="/var/lib/kubelet/pods/1b9214d6-382e-4900-a183-15eddecc201a/volumes" Nov 28 17:51:25 crc kubenswrapper[4710]: I1128 17:51:25.146211 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:51:25 crc kubenswrapper[4710]: E1128 17:51:25.147217 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:51:38 crc kubenswrapper[4710]: I1128 17:51:38.141945 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:51:38 crc kubenswrapper[4710]: E1128 17:51:38.142828 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:51:53 crc kubenswrapper[4710]: I1128 17:51:53.142317 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:51:53 crc kubenswrapper[4710]: E1128 17:51:53.143539 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:52:04 crc kubenswrapper[4710]: I1128 17:52:04.141752 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:52:04 crc kubenswrapper[4710]: E1128 17:52:04.142633 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.007308 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-24bj9"] Nov 28 17:52:15 crc kubenswrapper[4710]: E1128 17:52:15.009024 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9214d6-382e-4900-a183-15eddecc201a" containerName="registry-server" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.009056 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9214d6-382e-4900-a183-15eddecc201a" containerName="registry-server" Nov 28 17:52:15 crc kubenswrapper[4710]: E1128 17:52:15.009098 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9214d6-382e-4900-a183-15eddecc201a" containerName="extract-content" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.009116 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9214d6-382e-4900-a183-15eddecc201a" containerName="extract-content" Nov 28 17:52:15 crc kubenswrapper[4710]: E1128 17:52:15.009156 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9214d6-382e-4900-a183-15eddecc201a" containerName="extract-utilities" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.009174 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9214d6-382e-4900-a183-15eddecc201a" containerName="extract-utilities" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.009721 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9214d6-382e-4900-a183-15eddecc201a" containerName="registry-server" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.014388 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.041842 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24bj9"] Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.142156 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:52:15 crc kubenswrapper[4710]: E1128 17:52:15.142502 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.199544 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-catalog-content\") pod \"certified-operators-24bj9\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.199709 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxzg4\" (UniqueName: \"kubernetes.io/projected/1f50f736-0617-4d41-b25f-743fe1ee0b09-kube-api-access-pxzg4\") pod \"certified-operators-24bj9\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.199872 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-utilities\") pod \"certified-operators-24bj9\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.301706 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-catalog-content\") pod \"certified-operators-24bj9\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.301796 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxzg4\" (UniqueName: \"kubernetes.io/projected/1f50f736-0617-4d41-b25f-743fe1ee0b09-kube-api-access-pxzg4\") pod \"certified-operators-24bj9\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.301876 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-utilities\") pod \"certified-operators-24bj9\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.302710 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-catalog-content\") pod \"certified-operators-24bj9\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.302788 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-utilities\") pod \"certified-operators-24bj9\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.346429 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxzg4\" (UniqueName: \"kubernetes.io/projected/1f50f736-0617-4d41-b25f-743fe1ee0b09-kube-api-access-pxzg4\") pod \"certified-operators-24bj9\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.356262 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:15 crc kubenswrapper[4710]: I1128 17:52:15.859464 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24bj9"] Nov 28 17:52:16 crc kubenswrapper[4710]: I1128 17:52:16.339573 4710 generic.go:334] "Generic (PLEG): container finished" podID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerID="494e376b1ce6de2aa4d5c10c76d646658da6cd5922324be72dfb5e25dddc76d7" exitCode=0 Nov 28 17:52:16 crc kubenswrapper[4710]: I1128 17:52:16.339926 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24bj9" event={"ID":"1f50f736-0617-4d41-b25f-743fe1ee0b09","Type":"ContainerDied","Data":"494e376b1ce6de2aa4d5c10c76d646658da6cd5922324be72dfb5e25dddc76d7"} Nov 28 17:52:16 crc kubenswrapper[4710]: I1128 17:52:16.339988 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24bj9" event={"ID":"1f50f736-0617-4d41-b25f-743fe1ee0b09","Type":"ContainerStarted","Data":"00a592afe03377e4006211e783a987ef64c0b280f186894082fab1255ca54ee4"} Nov 28 17:52:16 crc kubenswrapper[4710]: I1128 17:52:16.350002 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 17:52:17 crc kubenswrapper[4710]: I1128 17:52:17.351869 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24bj9" event={"ID":"1f50f736-0617-4d41-b25f-743fe1ee0b09","Type":"ContainerStarted","Data":"9dea4375d61851507c602d52f0a5f82083941ee86a8dfc1ff12fcd74b9bd9575"} Nov 28 17:52:18 crc kubenswrapper[4710]: I1128 17:52:18.370816 4710 generic.go:334] "Generic (PLEG): container finished" podID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerID="9dea4375d61851507c602d52f0a5f82083941ee86a8dfc1ff12fcd74b9bd9575" exitCode=0 Nov 28 17:52:18 crc kubenswrapper[4710]: I1128 17:52:18.370909 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24bj9" event={"ID":"1f50f736-0617-4d41-b25f-743fe1ee0b09","Type":"ContainerDied","Data":"9dea4375d61851507c602d52f0a5f82083941ee86a8dfc1ff12fcd74b9bd9575"} Nov 28 17:52:19 crc kubenswrapper[4710]: I1128 17:52:19.385497 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24bj9" 
event={"ID":"1f50f736-0617-4d41-b25f-743fe1ee0b09","Type":"ContainerStarted","Data":"56bf575a4ab4062d5cc5df3a00fa491a10455dff13a055a6543f57a2bd737c46"} Nov 28 17:52:19 crc kubenswrapper[4710]: I1128 17:52:19.405983 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-24bj9" podStartSLOduration=2.936389082 podStartE2EDuration="5.405966094s" podCreationTimestamp="2025-11-28 17:52:14 +0000 UTC" firstStartedPulling="2025-11-28 17:52:16.349575655 +0000 UTC m=+3225.607875720" lastFinishedPulling="2025-11-28 17:52:18.819152677 +0000 UTC m=+3228.077452732" observedRunningTime="2025-11-28 17:52:19.402919777 +0000 UTC m=+3228.661219862" watchObservedRunningTime="2025-11-28 17:52:19.405966094 +0000 UTC m=+3228.664266139" Nov 28 17:52:25 crc kubenswrapper[4710]: I1128 17:52:25.357488 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:25 crc kubenswrapper[4710]: I1128 17:52:25.358194 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:25 crc kubenswrapper[4710]: I1128 17:52:25.429989 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:25 crc kubenswrapper[4710]: I1128 17:52:25.626647 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:28 crc kubenswrapper[4710]: I1128 17:52:28.141623 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:52:28 crc kubenswrapper[4710]: E1128 17:52:28.142358 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:52:28 crc kubenswrapper[4710]: I1128 17:52:28.975553 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24bj9"] Nov 28 17:52:28 crc kubenswrapper[4710]: I1128 17:52:28.976077 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-24bj9" podUID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerName="registry-server" containerID="cri-o://56bf575a4ab4062d5cc5df3a00fa491a10455dff13a055a6543f57a2bd737c46" gracePeriod=2 Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.513382 4710 generic.go:334] "Generic (PLEG): container finished" podID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerID="56bf575a4ab4062d5cc5df3a00fa491a10455dff13a055a6543f57a2bd737c46" exitCode=0 Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.513553 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24bj9" event={"ID":"1f50f736-0617-4d41-b25f-743fe1ee0b09","Type":"ContainerDied","Data":"56bf575a4ab4062d5cc5df3a00fa491a10455dff13a055a6543f57a2bd737c46"} Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.814506 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.922490 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-utilities\") pod \"1f50f736-0617-4d41-b25f-743fe1ee0b09\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.922865 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-catalog-content\") pod \"1f50f736-0617-4d41-b25f-743fe1ee0b09\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.923276 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxzg4\" (UniqueName: \"kubernetes.io/projected/1f50f736-0617-4d41-b25f-743fe1ee0b09-kube-api-access-pxzg4\") pod \"1f50f736-0617-4d41-b25f-743fe1ee0b09\" (UID: \"1f50f736-0617-4d41-b25f-743fe1ee0b09\") " Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.923585 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-utilities" (OuterVolumeSpecName: "utilities") pod "1f50f736-0617-4d41-b25f-743fe1ee0b09" (UID: "1f50f736-0617-4d41-b25f-743fe1ee0b09"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.924203 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.931722 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f50f736-0617-4d41-b25f-743fe1ee0b09-kube-api-access-pxzg4" (OuterVolumeSpecName: "kube-api-access-pxzg4") pod "1f50f736-0617-4d41-b25f-743fe1ee0b09" (UID: "1f50f736-0617-4d41-b25f-743fe1ee0b09"). InnerVolumeSpecName "kube-api-access-pxzg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:52:29 crc kubenswrapper[4710]: I1128 17:52:29.991815 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f50f736-0617-4d41-b25f-743fe1ee0b09" (UID: "1f50f736-0617-4d41-b25f-743fe1ee0b09"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:52:30 crc kubenswrapper[4710]: I1128 17:52:30.026524 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f50f736-0617-4d41-b25f-743fe1ee0b09-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 17:52:30 crc kubenswrapper[4710]: I1128 17:52:30.026576 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxzg4\" (UniqueName: \"kubernetes.io/projected/1f50f736-0617-4d41-b25f-743fe1ee0b09-kube-api-access-pxzg4\") on node \"crc\" DevicePath \"\"" Nov 28 17:52:30 crc kubenswrapper[4710]: I1128 17:52:30.530178 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24bj9" event={"ID":"1f50f736-0617-4d41-b25f-743fe1ee0b09","Type":"ContainerDied","Data":"00a592afe03377e4006211e783a987ef64c0b280f186894082fab1255ca54ee4"} Nov 28 17:52:30 crc kubenswrapper[4710]: I1128 17:52:30.530260 4710 scope.go:117] "RemoveContainer" containerID="56bf575a4ab4062d5cc5df3a00fa491a10455dff13a055a6543f57a2bd737c46" Nov 28 17:52:30 crc kubenswrapper[4710]: I1128 17:52:30.530601 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24bj9" Nov 28 17:52:30 crc kubenswrapper[4710]: I1128 17:52:30.584592 4710 scope.go:117] "RemoveContainer" containerID="9dea4375d61851507c602d52f0a5f82083941ee86a8dfc1ff12fcd74b9bd9575" Nov 28 17:52:30 crc kubenswrapper[4710]: I1128 17:52:30.588293 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24bj9"] Nov 28 17:52:30 crc kubenswrapper[4710]: I1128 17:52:30.600248 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-24bj9"] Nov 28 17:52:30 crc kubenswrapper[4710]: I1128 17:52:30.611125 4710 scope.go:117] "RemoveContainer" containerID="494e376b1ce6de2aa4d5c10c76d646658da6cd5922324be72dfb5e25dddc76d7" Nov 28 17:52:31 crc kubenswrapper[4710]: I1128 17:52:31.173128 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f50f736-0617-4d41-b25f-743fe1ee0b09" path="/var/lib/kubelet/pods/1f50f736-0617-4d41-b25f-743fe1ee0b09/volumes" Nov 28 17:52:42 crc kubenswrapper[4710]: I1128 17:52:42.143240 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:52:42 crc kubenswrapper[4710]: E1128 17:52:42.144262 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:52:54 crc kubenswrapper[4710]: I1128 17:52:54.142540 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:52:54 crc kubenswrapper[4710]: I1128 17:52:54.849499 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"9faa162fa2d0e90421242c87e8957b2d01034457183612706fea687b37d5e765"} Nov 28 17:53:21 crc kubenswrapper[4710]: I1128 17:53:21.207116 4710 generic.go:334] "Generic (PLEG): 
container finished" podID="e6e3da65-b095-4e28-9fab-5a481096c743" containerID="2982b6f4e59e5fc1a074eef7a4a25b576e925cdca293c02f5b2280b9af20aba1" exitCode=0 Nov 28 17:53:21 crc kubenswrapper[4710]: I1128 17:53:21.207180 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e6e3da65-b095-4e28-9fab-5a481096c743","Type":"ContainerDied","Data":"2982b6f4e59e5fc1a074eef7a4a25b576e925cdca293c02f5b2280b9af20aba1"} Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.660857 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.791903 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-temporary\") pod \"e6e3da65-b095-4e28-9fab-5a481096c743\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.792141 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config-secret\") pod \"e6e3da65-b095-4e28-9fab-5a481096c743\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.792263 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config\") pod \"e6e3da65-b095-4e28-9fab-5a481096c743\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.792369 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfn6z\" (UniqueName: \"kubernetes.io/projected/e6e3da65-b095-4e28-9fab-5a481096c743-kube-api-access-dfn6z\") pod \"e6e3da65-b095-4e28-9fab-5a481096c743\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.792447 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ca-certs\") pod \"e6e3da65-b095-4e28-9fab-5a481096c743\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.792549 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-workdir\") pod \"e6e3da65-b095-4e28-9fab-5a481096c743\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.792749 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-config-data\") pod \"e6e3da65-b095-4e28-9fab-5a481096c743\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.792887 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ssh-key\") pod \"e6e3da65-b095-4e28-9fab-5a481096c743\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " Nov 28 
17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.792990 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"e6e3da65-b095-4e28-9fab-5a481096c743\" (UID: \"e6e3da65-b095-4e28-9fab-5a481096c743\") " Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.792783 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "e6e3da65-b095-4e28-9fab-5a481096c743" (UID: "e6e3da65-b095-4e28-9fab-5a481096c743"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.793628 4710 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.794035 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-config-data" (OuterVolumeSpecName: "config-data") pod "e6e3da65-b095-4e28-9fab-5a481096c743" (UID: "e6e3da65-b095-4e28-9fab-5a481096c743"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.799108 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "e6e3da65-b095-4e28-9fab-5a481096c743" (UID: "e6e3da65-b095-4e28-9fab-5a481096c743"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.799533 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "e6e3da65-b095-4e28-9fab-5a481096c743" (UID: "e6e3da65-b095-4e28-9fab-5a481096c743"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.800081 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e3da65-b095-4e28-9fab-5a481096c743-kube-api-access-dfn6z" (OuterVolumeSpecName: "kube-api-access-dfn6z") pod "e6e3da65-b095-4e28-9fab-5a481096c743" (UID: "e6e3da65-b095-4e28-9fab-5a481096c743"). InnerVolumeSpecName "kube-api-access-dfn6z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.832774 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e6e3da65-b095-4e28-9fab-5a481096c743" (UID: "e6e3da65-b095-4e28-9fab-5a481096c743"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.842074 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e6e3da65-b095-4e28-9fab-5a481096c743" (UID: "e6e3da65-b095-4e28-9fab-5a481096c743"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.854152 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "e6e3da65-b095-4e28-9fab-5a481096c743" (UID: "e6e3da65-b095-4e28-9fab-5a481096c743"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.888256 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e6e3da65-b095-4e28-9fab-5a481096c743" (UID: "e6e3da65-b095-4e28-9fab-5a481096c743"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.896309 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfn6z\" (UniqueName: \"kubernetes.io/projected/e6e3da65-b095-4e28-9fab-5a481096c743-kube-api-access-dfn6z\") on node \"crc\" DevicePath \"\"" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.896371 4710 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.896401 4710 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e6e3da65-b095-4e28-9fab-5a481096c743-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.896431 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.896461 4710 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.896544 4710 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.896572 4710 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.896593 4710 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e6e3da65-b095-4e28-9fab-5a481096c743-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 28 
17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.928542 4710 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Nov 28 17:53:22 crc kubenswrapper[4710]: I1128 17:53:22.998929 4710 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Nov 28 17:53:23 crc kubenswrapper[4710]: I1128 17:53:23.230245 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"e6e3da65-b095-4e28-9fab-5a481096c743","Type":"ContainerDied","Data":"bff35e5dc2c53a3b4a2e2d4062b699e93588db006cb5f82677097133026e52c5"} Nov 28 17:53:23 crc kubenswrapper[4710]: I1128 17:53:23.230342 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 28 17:53:23 crc kubenswrapper[4710]: I1128 17:53:23.230347 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bff35e5dc2c53a3b4a2e2d4062b699e93588db006cb5f82677097133026e52c5" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.135078 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 17:53:31 crc kubenswrapper[4710]: E1128 17:53:31.136043 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e3da65-b095-4e28-9fab-5a481096c743" containerName="tempest-tests-tempest-tests-runner" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.136063 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e3da65-b095-4e28-9fab-5a481096c743" containerName="tempest-tests-tempest-tests-runner" Nov 28 17:53:31 crc kubenswrapper[4710]: E1128 17:53:31.136080 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerName="extract-content" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.136089 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerName="extract-content" Nov 28 17:53:31 crc kubenswrapper[4710]: E1128 17:53:31.136134 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerName="extract-utilities" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.136143 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerName="extract-utilities" Nov 28 17:53:31 crc kubenswrapper[4710]: E1128 17:53:31.136161 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerName="registry-server" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.136169 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerName="registry-server" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.136497 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f50f736-0617-4d41-b25f-743fe1ee0b09" containerName="registry-server" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.136551 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e3da65-b095-4e28-9fab-5a481096c743" containerName="tempest-tests-tempest-tests-runner" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.137812 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.140968 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hbxmh" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.188432 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.276468 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"218e86f8-62d5-49f2-83fd-9f63432aef22\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.276709 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4g28\" (UniqueName: \"kubernetes.io/projected/218e86f8-62d5-49f2-83fd-9f63432aef22-kube-api-access-s4g28\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"218e86f8-62d5-49f2-83fd-9f63432aef22\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.380106 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4g28\" (UniqueName: \"kubernetes.io/projected/218e86f8-62d5-49f2-83fd-9f63432aef22-kube-api-access-s4g28\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"218e86f8-62d5-49f2-83fd-9f63432aef22\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.380724 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"218e86f8-62d5-49f2-83fd-9f63432aef22\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.381270 4710 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"218e86f8-62d5-49f2-83fd-9f63432aef22\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.424718 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4g28\" (UniqueName: \"kubernetes.io/projected/218e86f8-62d5-49f2-83fd-9f63432aef22-kube-api-access-s4g28\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"218e86f8-62d5-49f2-83fd-9f63432aef22\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.433204 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"218e86f8-62d5-49f2-83fd-9f63432aef22\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 17:53:31 crc 
kubenswrapper[4710]: I1128 17:53:31.472792 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 28 17:53:31 crc kubenswrapper[4710]: I1128 17:53:31.989182 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 28 17:53:32 crc kubenswrapper[4710]: I1128 17:53:32.342851 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"218e86f8-62d5-49f2-83fd-9f63432aef22","Type":"ContainerStarted","Data":"415d17d3fe989d229481a3d885d5c49b88ed6b0457e5227c429cde545c371983"} Nov 28 17:53:34 crc kubenswrapper[4710]: I1128 17:53:34.376574 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"218e86f8-62d5-49f2-83fd-9f63432aef22","Type":"ContainerStarted","Data":"44d3a2fd8d2b1e0f8175b6c9c2969fa04de692e72b19cf55f2cb608a02c6f2d2"} Nov 28 17:53:34 crc kubenswrapper[4710]: I1128 17:53:34.419741 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.261145134 podStartE2EDuration="3.419709966s" podCreationTimestamp="2025-11-28 17:53:31 +0000 UTC" firstStartedPulling="2025-11-28 17:53:32.000555067 +0000 UTC m=+3301.258855172" lastFinishedPulling="2025-11-28 17:53:33.159119929 +0000 UTC m=+3302.417420004" observedRunningTime="2025-11-28 17:53:34.41637275 +0000 UTC m=+3303.674672835" watchObservedRunningTime="2025-11-28 17:53:34.419709966 +0000 UTC m=+3303.678010041" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.271433 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-22qbx/must-gather-bpq96"] Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.274478 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.277611 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-22qbx"/"openshift-service-ca.crt" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.277707 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-22qbx"/"kube-root-ca.crt" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.277796 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-22qbx"/"default-dockercfg-2nw7g" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.300549 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-22qbx/must-gather-bpq96"] Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.358042 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfjj\" (UniqueName: \"kubernetes.io/projected/10395304-0e2c-4cb0-bfd0-7a850ac729ef-kube-api-access-xnfjj\") pod \"must-gather-bpq96\" (UID: \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\") " pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.358135 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10395304-0e2c-4cb0-bfd0-7a850ac729ef-must-gather-output\") pod \"must-gather-bpq96\" (UID: \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\") " pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.460955 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnfjj\" (UniqueName: \"kubernetes.io/projected/10395304-0e2c-4cb0-bfd0-7a850ac729ef-kube-api-access-xnfjj\") pod \"must-gather-bpq96\" (UID: \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\") " pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.461093 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10395304-0e2c-4cb0-bfd0-7a850ac729ef-must-gather-output\") pod \"must-gather-bpq96\" (UID: \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\") " pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.461601 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10395304-0e2c-4cb0-bfd0-7a850ac729ef-must-gather-output\") pod \"must-gather-bpq96\" (UID: \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\") " pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.478739 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnfjj\" (UniqueName: \"kubernetes.io/projected/10395304-0e2c-4cb0-bfd0-7a850ac729ef-kube-api-access-xnfjj\") pod \"must-gather-bpq96\" (UID: \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\") " pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 17:54:18 crc kubenswrapper[4710]: I1128 17:54:18.597608 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 17:54:19 crc kubenswrapper[4710]: I1128 17:54:19.074170 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-22qbx/must-gather-bpq96"] Nov 28 17:54:19 crc kubenswrapper[4710]: I1128 17:54:19.941010 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/must-gather-bpq96" event={"ID":"10395304-0e2c-4cb0-bfd0-7a850ac729ef","Type":"ContainerStarted","Data":"8d257cd7b2d190c0364d4bc5b53fa8d28b44f5ed17bd68d37e0a587f89f4e5d8"} Nov 28 17:54:24 crc kubenswrapper[4710]: I1128 17:54:24.996228 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/must-gather-bpq96" event={"ID":"10395304-0e2c-4cb0-bfd0-7a850ac729ef","Type":"ContainerStarted","Data":"e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f"} Nov 28 17:54:24 crc kubenswrapper[4710]: I1128 17:54:24.996816 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/must-gather-bpq96" event={"ID":"10395304-0e2c-4cb0-bfd0-7a850ac729ef","Type":"ContainerStarted","Data":"a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919"} Nov 28 17:54:27 crc kubenswrapper[4710]: I1128 17:54:27.862289 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-22qbx/must-gather-bpq96" podStartSLOduration=5.214978054 podStartE2EDuration="9.862271875s" podCreationTimestamp="2025-11-28 17:54:18 +0000 UTC" firstStartedPulling="2025-11-28 17:54:19.082417253 +0000 UTC m=+3348.340717318" lastFinishedPulling="2025-11-28 17:54:23.729711094 +0000 UTC m=+3352.988011139" observedRunningTime="2025-11-28 17:54:25.031568339 +0000 UTC m=+3354.289868384" watchObservedRunningTime="2025-11-28 17:54:27.862271875 +0000 UTC m=+3357.120571910" Nov 28 17:54:27 crc kubenswrapper[4710]: I1128 17:54:27.869047 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-22qbx/crc-debug-6jlb5"] Nov 28 17:54:27 crc kubenswrapper[4710]: I1128 17:54:27.870553 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:54:27 crc kubenswrapper[4710]: I1128 17:54:27.976008 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxx8p\" (UniqueName: \"kubernetes.io/projected/dd304bf8-da31-4a4f-9901-326bc8e59996-kube-api-access-gxx8p\") pod \"crc-debug-6jlb5\" (UID: \"dd304bf8-da31-4a4f-9901-326bc8e59996\") " pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:54:27 crc kubenswrapper[4710]: I1128 17:54:27.976360 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd304bf8-da31-4a4f-9901-326bc8e59996-host\") pod \"crc-debug-6jlb5\" (UID: \"dd304bf8-da31-4a4f-9901-326bc8e59996\") " pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:54:28 crc kubenswrapper[4710]: I1128 17:54:28.078259 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd304bf8-da31-4a4f-9901-326bc8e59996-host\") pod \"crc-debug-6jlb5\" (UID: \"dd304bf8-da31-4a4f-9901-326bc8e59996\") " pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:54:28 crc kubenswrapper[4710]: I1128 17:54:28.078356 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxx8p\" (UniqueName: \"kubernetes.io/projected/dd304bf8-da31-4a4f-9901-326bc8e59996-kube-api-access-gxx8p\") pod \"crc-debug-6jlb5\" (UID: \"dd304bf8-da31-4a4f-9901-326bc8e59996\") " pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:54:28 crc kubenswrapper[4710]: I1128 17:54:28.078408 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd304bf8-da31-4a4f-9901-326bc8e59996-host\") pod \"crc-debug-6jlb5\" (UID: \"dd304bf8-da31-4a4f-9901-326bc8e59996\") " pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:54:28 crc kubenswrapper[4710]: I1128 17:54:28.095305 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxx8p\" (UniqueName: \"kubernetes.io/projected/dd304bf8-da31-4a4f-9901-326bc8e59996-kube-api-access-gxx8p\") pod \"crc-debug-6jlb5\" (UID: \"dd304bf8-da31-4a4f-9901-326bc8e59996\") " pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:54:28 crc kubenswrapper[4710]: I1128 17:54:28.190098 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:54:28 crc kubenswrapper[4710]: W1128 17:54:28.232292 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd304bf8_da31_4a4f_9901_326bc8e59996.slice/crio-1a0b8059ae436e66d0c5871f9c3ced6607f0a1b04556ed88c4c1cafa6458baf1 WatchSource:0}: Error finding container 1a0b8059ae436e66d0c5871f9c3ced6607f0a1b04556ed88c4c1cafa6458baf1: Status 404 returned error can't find the container with id 1a0b8059ae436e66d0c5871f9c3ced6607f0a1b04556ed88c4c1cafa6458baf1 Nov 28 17:54:29 crc kubenswrapper[4710]: I1128 17:54:29.035578 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/crc-debug-6jlb5" event={"ID":"dd304bf8-da31-4a4f-9901-326bc8e59996","Type":"ContainerStarted","Data":"1a0b8059ae436e66d0c5871f9c3ced6607f0a1b04556ed88c4c1cafa6458baf1"} Nov 28 17:54:40 crc kubenswrapper[4710]: I1128 17:54:40.175344 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/crc-debug-6jlb5" event={"ID":"dd304bf8-da31-4a4f-9901-326bc8e59996","Type":"ContainerStarted","Data":"7e50b18ae0017b57b98a4e408ea6d5232230e45f2f30641be04326de16031872"} Nov 28 17:55:13 crc kubenswrapper[4710]: I1128 17:55:13.343842 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:55:13 crc kubenswrapper[4710]: I1128 17:55:13.344373 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:55:18 crc kubenswrapper[4710]: I1128 17:55:18.596150 4710 generic.go:334] "Generic (PLEG): container finished" podID="dd304bf8-da31-4a4f-9901-326bc8e59996" containerID="7e50b18ae0017b57b98a4e408ea6d5232230e45f2f30641be04326de16031872" exitCode=0 Nov 28 17:55:18 crc kubenswrapper[4710]: I1128 17:55:18.596213 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/crc-debug-6jlb5" event={"ID":"dd304bf8-da31-4a4f-9901-326bc8e59996","Type":"ContainerDied","Data":"7e50b18ae0017b57b98a4e408ea6d5232230e45f2f30641be04326de16031872"} Nov 28 17:55:19 crc kubenswrapper[4710]: I1128 17:55:19.731882 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:55:19 crc kubenswrapper[4710]: I1128 17:55:19.771373 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-22qbx/crc-debug-6jlb5"] Nov 28 17:55:19 crc kubenswrapper[4710]: I1128 17:55:19.791374 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-22qbx/crc-debug-6jlb5"] Nov 28 17:55:19 crc kubenswrapper[4710]: I1128 17:55:19.815078 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd304bf8-da31-4a4f-9901-326bc8e59996-host\") pod \"dd304bf8-da31-4a4f-9901-326bc8e59996\" (UID: \"dd304bf8-da31-4a4f-9901-326bc8e59996\") " Nov 28 17:55:19 crc kubenswrapper[4710]: I1128 17:55:19.815164 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxx8p\" (UniqueName: \"kubernetes.io/projected/dd304bf8-da31-4a4f-9901-326bc8e59996-kube-api-access-gxx8p\") pod \"dd304bf8-da31-4a4f-9901-326bc8e59996\" (UID: \"dd304bf8-da31-4a4f-9901-326bc8e59996\") " Nov 28 17:55:19 crc kubenswrapper[4710]: I1128 17:55:19.815233 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd304bf8-da31-4a4f-9901-326bc8e59996-host" (OuterVolumeSpecName: "host") pod "dd304bf8-da31-4a4f-9901-326bc8e59996" (UID: "dd304bf8-da31-4a4f-9901-326bc8e59996"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:55:19 crc kubenswrapper[4710]: I1128 17:55:19.815865 4710 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd304bf8-da31-4a4f-9901-326bc8e59996-host\") on node \"crc\" DevicePath \"\"" Nov 28 17:55:19 crc kubenswrapper[4710]: I1128 17:55:19.823640 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd304bf8-da31-4a4f-9901-326bc8e59996-kube-api-access-gxx8p" (OuterVolumeSpecName: "kube-api-access-gxx8p") pod "dd304bf8-da31-4a4f-9901-326bc8e59996" (UID: "dd304bf8-da31-4a4f-9901-326bc8e59996"). InnerVolumeSpecName "kube-api-access-gxx8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:55:19 crc kubenswrapper[4710]: I1128 17:55:19.917878 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxx8p\" (UniqueName: \"kubernetes.io/projected/dd304bf8-da31-4a4f-9901-326bc8e59996-kube-api-access-gxx8p\") on node \"crc\" DevicePath \"\"" Nov 28 17:55:20 crc kubenswrapper[4710]: I1128 17:55:20.624517 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a0b8059ae436e66d0c5871f9c3ced6607f0a1b04556ed88c4c1cafa6458baf1" Nov 28 17:55:20 crc kubenswrapper[4710]: I1128 17:55:20.624617 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-6jlb5" Nov 28 17:55:20 crc kubenswrapper[4710]: I1128 17:55:20.976624 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-22qbx/crc-debug-dqkk4"] Nov 28 17:55:20 crc kubenswrapper[4710]: E1128 17:55:20.977177 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd304bf8-da31-4a4f-9901-326bc8e59996" containerName="container-00" Nov 28 17:55:20 crc kubenswrapper[4710]: I1128 17:55:20.977194 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd304bf8-da31-4a4f-9901-326bc8e59996" containerName="container-00" Nov 28 17:55:20 crc kubenswrapper[4710]: I1128 17:55:20.977608 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd304bf8-da31-4a4f-9901-326bc8e59996" containerName="container-00" Nov 28 17:55:20 crc kubenswrapper[4710]: I1128 17:55:20.978500 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.146969 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq492\" (UniqueName: \"kubernetes.io/projected/3f208941-5fa3-4b8d-baa6-0f1c97440122-kube-api-access-fq492\") pod \"crc-debug-dqkk4\" (UID: \"3f208941-5fa3-4b8d-baa6-0f1c97440122\") " pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.147093 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f208941-5fa3-4b8d-baa6-0f1c97440122-host\") pod \"crc-debug-dqkk4\" (UID: \"3f208941-5fa3-4b8d-baa6-0f1c97440122\") " pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.154245 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd304bf8-da31-4a4f-9901-326bc8e59996" path="/var/lib/kubelet/pods/dd304bf8-da31-4a4f-9901-326bc8e59996/volumes" Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.249358 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f208941-5fa3-4b8d-baa6-0f1c97440122-host\") pod \"crc-debug-dqkk4\" (UID: \"3f208941-5fa3-4b8d-baa6-0f1c97440122\") " pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.249601 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f208941-5fa3-4b8d-baa6-0f1c97440122-host\") pod \"crc-debug-dqkk4\" (UID: \"3f208941-5fa3-4b8d-baa6-0f1c97440122\") " pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.250072 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fq492\" (UniqueName: \"kubernetes.io/projected/3f208941-5fa3-4b8d-baa6-0f1c97440122-kube-api-access-fq492\") pod \"crc-debug-dqkk4\" (UID: \"3f208941-5fa3-4b8d-baa6-0f1c97440122\") " pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.274706 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fq492\" (UniqueName: \"kubernetes.io/projected/3f208941-5fa3-4b8d-baa6-0f1c97440122-kube-api-access-fq492\") pod \"crc-debug-dqkk4\" (UID: \"3f208941-5fa3-4b8d-baa6-0f1c97440122\") " 
pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.297019 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.634085 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/crc-debug-dqkk4" event={"ID":"3f208941-5fa3-4b8d-baa6-0f1c97440122","Type":"ContainerStarted","Data":"375931c32e11f978641e7b2fcb4009eda60e2ab095b565ecf7abac3cd2660f76"} Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.634393 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/crc-debug-dqkk4" event={"ID":"3f208941-5fa3-4b8d-baa6-0f1c97440122","Type":"ContainerStarted","Data":"64eeac64deaf312ad7442355df6d76f2b26f1cf5311f524795d1647a147dbdda"} Nov 28 17:55:21 crc kubenswrapper[4710]: I1128 17:55:21.656670 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-22qbx/crc-debug-dqkk4" podStartSLOduration=1.656635409 podStartE2EDuration="1.656635409s" podCreationTimestamp="2025-11-28 17:55:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 17:55:21.644592186 +0000 UTC m=+3410.902892231" watchObservedRunningTime="2025-11-28 17:55:21.656635409 +0000 UTC m=+3410.914935454" Nov 28 17:55:22 crc kubenswrapper[4710]: I1128 17:55:22.649138 4710 generic.go:334] "Generic (PLEG): container finished" podID="3f208941-5fa3-4b8d-baa6-0f1c97440122" containerID="375931c32e11f978641e7b2fcb4009eda60e2ab095b565ecf7abac3cd2660f76" exitCode=0 Nov 28 17:55:22 crc kubenswrapper[4710]: I1128 17:55:22.649184 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/crc-debug-dqkk4" event={"ID":"3f208941-5fa3-4b8d-baa6-0f1c97440122","Type":"ContainerDied","Data":"375931c32e11f978641e7b2fcb4009eda60e2ab095b565ecf7abac3cd2660f76"} Nov 28 17:55:23 crc kubenswrapper[4710]: I1128 17:55:23.826252 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:23 crc kubenswrapper[4710]: I1128 17:55:23.872188 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-22qbx/crc-debug-dqkk4"] Nov 28 17:55:23 crc kubenswrapper[4710]: I1128 17:55:23.884246 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-22qbx/crc-debug-dqkk4"] Nov 28 17:55:23 crc kubenswrapper[4710]: I1128 17:55:23.919232 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f208941-5fa3-4b8d-baa6-0f1c97440122-host\") pod \"3f208941-5fa3-4b8d-baa6-0f1c97440122\" (UID: \"3f208941-5fa3-4b8d-baa6-0f1c97440122\") " Nov 28 17:55:23 crc kubenswrapper[4710]: I1128 17:55:23.919390 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f208941-5fa3-4b8d-baa6-0f1c97440122-host" (OuterVolumeSpecName: "host") pod "3f208941-5fa3-4b8d-baa6-0f1c97440122" (UID: "3f208941-5fa3-4b8d-baa6-0f1c97440122"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:55:23 crc kubenswrapper[4710]: I1128 17:55:23.919489 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fq492\" (UniqueName: \"kubernetes.io/projected/3f208941-5fa3-4b8d-baa6-0f1c97440122-kube-api-access-fq492\") pod \"3f208941-5fa3-4b8d-baa6-0f1c97440122\" (UID: \"3f208941-5fa3-4b8d-baa6-0f1c97440122\") " Nov 28 17:55:23 crc kubenswrapper[4710]: I1128 17:55:23.920159 4710 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f208941-5fa3-4b8d-baa6-0f1c97440122-host\") on node \"crc\" DevicePath \"\"" Nov 28 17:55:24 crc kubenswrapper[4710]: I1128 17:55:24.684321 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64eeac64deaf312ad7442355df6d76f2b26f1cf5311f524795d1647a147dbdda" Nov 28 17:55:24 crc kubenswrapper[4710]: I1128 17:55:24.684447 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-dqkk4" Nov 28 17:55:24 crc kubenswrapper[4710]: I1128 17:55:24.995040 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" podUID="a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:55:24 crc kubenswrapper[4710]: I1128 17:55:24.995147 4710 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-tkzbw" podUID="a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 28 17:55:25 crc kubenswrapper[4710]: I1128 17:55:25.265142 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f208941-5fa3-4b8d-baa6-0f1c97440122-kube-api-access-fq492" (OuterVolumeSpecName: "kube-api-access-fq492") pod "3f208941-5fa3-4b8d-baa6-0f1c97440122" (UID: "3f208941-5fa3-4b8d-baa6-0f1c97440122"). InnerVolumeSpecName "kube-api-access-fq492". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:55:25 crc kubenswrapper[4710]: I1128 17:55:25.354397 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fq492\" (UniqueName: \"kubernetes.io/projected/3f208941-5fa3-4b8d-baa6-0f1c97440122-kube-api-access-fq492\") on node \"crc\" DevicePath \"\"" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.551867 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-22qbx/crc-debug-nphdg"] Nov 28 17:55:26 crc kubenswrapper[4710]: E1128 17:55:26.553946 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f208941-5fa3-4b8d-baa6-0f1c97440122" containerName="container-00" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.553992 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f208941-5fa3-4b8d-baa6-0f1c97440122" containerName="container-00" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.554480 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f208941-5fa3-4b8d-baa6-0f1c97440122" containerName="container-00" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.556184 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.684799 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkpk4\" (UniqueName: \"kubernetes.io/projected/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-kube-api-access-dkpk4\") pod \"crc-debug-nphdg\" (UID: \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\") " pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.684853 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-host\") pod \"crc-debug-nphdg\" (UID: \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\") " pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.788276 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkpk4\" (UniqueName: \"kubernetes.io/projected/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-kube-api-access-dkpk4\") pod \"crc-debug-nphdg\" (UID: \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\") " pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.788373 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-host\") pod \"crc-debug-nphdg\" (UID: \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\") " pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.788825 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-host\") pod \"crc-debug-nphdg\" (UID: \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\") " pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.827553 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkpk4\" (UniqueName: \"kubernetes.io/projected/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-kube-api-access-dkpk4\") pod \"crc-debug-nphdg\" (UID: \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\") " pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:26 crc kubenswrapper[4710]: I1128 17:55:26.882609 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:26 crc kubenswrapper[4710]: W1128 17:55:26.924083 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6943d2_1c43_4d5b_b1e2_d0aeb1d54eb9.slice/crio-a34d47704e232b94c0ce3488693fba198477bd802362c5a4d84d16d433a38594 WatchSource:0}: Error finding container a34d47704e232b94c0ce3488693fba198477bd802362c5a4d84d16d433a38594: Status 404 returned error can't find the container with id a34d47704e232b94c0ce3488693fba198477bd802362c5a4d84d16d433a38594 Nov 28 17:55:27 crc kubenswrapper[4710]: I1128 17:55:27.158303 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f208941-5fa3-4b8d-baa6-0f1c97440122" path="/var/lib/kubelet/pods/3f208941-5fa3-4b8d-baa6-0f1c97440122/volumes" Nov 28 17:55:27 crc kubenswrapper[4710]: I1128 17:55:27.723179 4710 generic.go:334] "Generic (PLEG): container finished" podID="bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9" containerID="bf5e3fb29bb6688d02f7fa0367c9d5bb1cd7301edb1630cbcbd95d1e10515827" exitCode=0 Nov 28 17:55:27 crc kubenswrapper[4710]: I1128 17:55:27.723236 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/crc-debug-nphdg" event={"ID":"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9","Type":"ContainerDied","Data":"bf5e3fb29bb6688d02f7fa0367c9d5bb1cd7301edb1630cbcbd95d1e10515827"} Nov 28 17:55:27 crc kubenswrapper[4710]: I1128 17:55:27.723275 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/crc-debug-nphdg" event={"ID":"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9","Type":"ContainerStarted","Data":"a34d47704e232b94c0ce3488693fba198477bd802362c5a4d84d16d433a38594"} Nov 28 17:55:27 crc kubenswrapper[4710]: I1128 17:55:27.775180 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-22qbx/crc-debug-nphdg"] Nov 28 17:55:27 crc kubenswrapper[4710]: I1128 17:55:27.786913 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-22qbx/crc-debug-nphdg"] Nov 28 17:55:28 crc kubenswrapper[4710]: I1128 17:55:28.846700 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:28 crc kubenswrapper[4710]: I1128 17:55:28.937525 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-host\") pod \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\" (UID: \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\") " Nov 28 17:55:28 crc kubenswrapper[4710]: I1128 17:55:28.937686 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-host" (OuterVolumeSpecName: "host") pod "bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9" (UID: "bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 28 17:55:28 crc kubenswrapper[4710]: I1128 17:55:28.937731 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkpk4\" (UniqueName: \"kubernetes.io/projected/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-kube-api-access-dkpk4\") pod \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\" (UID: \"bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9\") " Nov 28 17:55:28 crc kubenswrapper[4710]: I1128 17:55:28.944935 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-kube-api-access-dkpk4" (OuterVolumeSpecName: "kube-api-access-dkpk4") pod "bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9" (UID: "bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9"). InnerVolumeSpecName "kube-api-access-dkpk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 17:55:29 crc kubenswrapper[4710]: I1128 17:55:29.040472 4710 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-host\") on node \"crc\" DevicePath \"\"" Nov 28 17:55:29 crc kubenswrapper[4710]: I1128 17:55:29.040551 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkpk4\" (UniqueName: \"kubernetes.io/projected/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9-kube-api-access-dkpk4\") on node \"crc\" DevicePath \"\"" Nov 28 17:55:29 crc kubenswrapper[4710]: I1128 17:55:29.154789 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9" path="/var/lib/kubelet/pods/bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9/volumes" Nov 28 17:55:29 crc kubenswrapper[4710]: I1128 17:55:29.749317 4710 scope.go:117] "RemoveContainer" containerID="bf5e3fb29bb6688d02f7fa0367c9d5bb1cd7301edb1630cbcbd95d1e10515827" Nov 28 17:55:29 crc kubenswrapper[4710]: I1128 17:55:29.749381 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/crc-debug-nphdg" Nov 28 17:55:42 crc kubenswrapper[4710]: I1128 17:55:42.597570 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-757985fd5d-pvjnf_c052297b-c856-44c2-8fd2-66f76671785b/barbican-api/0.log" Nov 28 17:55:42 crc kubenswrapper[4710]: I1128 17:55:42.713700 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-757985fd5d-pvjnf_c052297b-c856-44c2-8fd2-66f76671785b/barbican-api-log/0.log" Nov 28 17:55:42 crc kubenswrapper[4710]: I1128 17:55:42.783981 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f976d8c48-8849p_e5a6ae13-4584-4438-a7eb-fd33a80e8ee7/barbican-keystone-listener/0.log" Nov 28 17:55:42 crc kubenswrapper[4710]: I1128 17:55:42.868558 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5f976d8c48-8849p_e5a6ae13-4584-4438-a7eb-fd33a80e8ee7/barbican-keystone-listener-log/0.log" Nov 28 17:55:42 crc kubenswrapper[4710]: I1128 17:55:42.935737 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-676bbb9799-m7pq6_3a0e62fb-f82d-4585-8c51-9c3d947027e9/barbican-worker/0.log" Nov 28 17:55:42 crc kubenswrapper[4710]: I1128 17:55:42.996934 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-676bbb9799-m7pq6_3a0e62fb-f82d-4585-8c51-9c3d947027e9/barbican-worker-log/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.122450 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-g5j7b_24989137-409c-4abb-96da-a28e2382b122/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.212351 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6ebdff21-cac4-4864-8bc5-47c8d8ca30ca/ceilometer-central-agent/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.245577 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6ebdff21-cac4-4864-8bc5-47c8d8ca30ca/ceilometer-notification-agent/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.346382 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.346436 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.377574 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6ebdff21-cac4-4864-8bc5-47c8d8ca30ca/sg-core/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.379053 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_6ebdff21-cac4-4864-8bc5-47c8d8ca30ca/proxy-httpd/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.429088 4710 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_02dda5a0-8c02-4b9e-a122-573bc14ef753/cinder-api/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.625101 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_02dda5a0-8c02-4b9e-a122-573bc14ef753/cinder-api-log/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.687868 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5/cinder-scheduler/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.699557 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7dcb222e-0e19-4ab3-bb78-a7b8ebc23aa5/probe/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.863980 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ql95k_6fc16997-7ac9-4f0f-aec1-32bed7b875b0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:43 crc kubenswrapper[4710]: I1128 17:55:43.915620 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-qnhj9_8ea0b283-a909-4071-b414-acf02181dc0f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.054788 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-k8mql_9d817523-77e3-415b-9606-89cfcede076e/init/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.260970 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-k8mql_9d817523-77e3-415b-9606-89cfcede076e/init/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.268436 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-k8mql_9d817523-77e3-415b-9606-89cfcede076e/dnsmasq-dns/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.278446 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-d5xgp_e16d30ed-d490-425c-804b-c633d6286195/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.444475 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fa610e74-7719-43b5-ae08-ea611158b446/glance-httpd/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.481439 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_fa610e74-7719-43b5-ae08-ea611158b446/glance-log/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.663944 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_361b0d95-8489-4799-bc9b-a6232aee65d3/glance-log/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.697579 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_361b0d95-8489-4799-bc9b-a6232aee65d3/glance-httpd/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.828986 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-fj7kn_9d978938-7c7b-4b24-92a4-dda564a4d288/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:44 crc kubenswrapper[4710]: I1128 17:55:44.906080 4710 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-gv2n2_762129bb-bd6f-46a3-87e5-38b37476e994/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:45 crc kubenswrapper[4710]: I1128 17:55:45.204270 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_9439f76f-1d85-4e4a-86a6-0b86e169712b/kube-state-metrics/0.log" Nov 28 17:55:45 crc kubenswrapper[4710]: I1128 17:55:45.248038 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-7559b9d56c-625td_c67d7e30-dd12-4650-9063-cb49b972e3b5/keystone-api/0.log" Nov 28 17:55:45 crc kubenswrapper[4710]: I1128 17:55:45.418934 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5zqvr_04db3c20-a29b-4288-9ee7-4739e0796595/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:45 crc kubenswrapper[4710]: I1128 17:55:45.438391 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-xxssf_834f349e-2478-4abd-b6a1-0d413728889f/logging-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:45 crc kubenswrapper[4710]: I1128 17:55:45.890873 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-654d6f49b5-qjswk_8c44bf34-558b-4635-9122-b144d09c7085/neutron-httpd/0.log" Nov 28 17:55:45 crc kubenswrapper[4710]: I1128 17:55:45.896647 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-654d6f49b5-qjswk_8c44bf34-558b-4635-9122-b144d09c7085/neutron-api/0.log" Nov 28 17:55:46 crc kubenswrapper[4710]: I1128 17:55:46.045784 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-c76pv_915c1bf8-3797-4c5a-a991-45be0aab70b9/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:46 crc kubenswrapper[4710]: I1128 17:55:46.555968 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_4feeed2e-20e0-49a9-8448-2805a2f332e2/nova-cell0-conductor-conductor/0.log" Nov 28 17:55:46 crc kubenswrapper[4710]: I1128 17:55:46.556336 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b21809f6-0359-4e17-b098-3002764c13c4/nova-api-log/0.log" Nov 28 17:55:46 crc kubenswrapper[4710]: I1128 17:55:46.666083 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b21809f6-0359-4e17-b098-3002764c13c4/nova-api-api/0.log" Nov 28 17:55:46 crc kubenswrapper[4710]: I1128 17:55:46.780711 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_64b01ab0-53fd-4ada-897c-3a84952a9fb9/nova-cell1-conductor-conductor/0.log" Nov 28 17:55:46 crc kubenswrapper[4710]: I1128 17:55:46.895513 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_64d64187-1205-4085-8084-39e9b4c2efec/nova-cell1-novncproxy-novncproxy/0.log" Nov 28 17:55:47 crc kubenswrapper[4710]: I1128 17:55:47.013835 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-57c5v_40b0849f-9e1d-4ced-83bd-af1db06a347c/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:47 crc kubenswrapper[4710]: I1128 17:55:47.296899 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6f0fff04-08c6-4268-8534-fa5b2e28e58f/nova-metadata-log/0.log" Nov 28 17:55:47 crc kubenswrapper[4710]: I1128 
17:55:47.412101 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_362c81cb-3e82-49e0-be70-7206bcd8ebe8/nova-scheduler-scheduler/0.log" Nov 28 17:55:47 crc kubenswrapper[4710]: I1128 17:55:47.514503 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_140993a2-eccd-471d-a0ce-df4600f96e20/mysql-bootstrap/0.log" Nov 28 17:55:47 crc kubenswrapper[4710]: I1128 17:55:47.713624 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_140993a2-eccd-471d-a0ce-df4600f96e20/mysql-bootstrap/0.log" Nov 28 17:55:47 crc kubenswrapper[4710]: I1128 17:55:47.740633 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_140993a2-eccd-471d-a0ce-df4600f96e20/galera/0.log" Nov 28 17:55:47 crc kubenswrapper[4710]: I1128 17:55:47.915411 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aa87ab33-407c-463c-8f9e-79eb5e55c981/mysql-bootstrap/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.108628 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aa87ab33-407c-463c-8f9e-79eb5e55c981/mysql-bootstrap/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.152899 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_aa87ab33-407c-463c-8f9e-79eb5e55c981/galera/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.228068 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6f0fff04-08c6-4268-8534-fa5b2e28e58f/nova-metadata-metadata/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.354573 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_4795b5d0-66f8-4392-8496-494fad8e7e69/openstackclient/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.434751 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4h2ch_c9a14e8a-2aba-4827-8ff4-48858bec6075/ovn-controller/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.558620 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-48css_1cd28302-c515-4e75-8092-cc99b132bc7e/openstack-network-exporter/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.676334 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t2rdj_8704135f-2602-4980-bdf2-875f4a9391e3/ovsdb-server-init/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.847478 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t2rdj_8704135f-2602-4980-bdf2-875f4a9391e3/ovsdb-server-init/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.869255 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t2rdj_8704135f-2602-4980-bdf2-875f4a9391e3/ovsdb-server/0.log" Nov 28 17:55:48 crc kubenswrapper[4710]: I1128 17:55:48.934017 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-t2rdj_8704135f-2602-4980-bdf2-875f4a9391e3/ovs-vswitchd/0.log" Nov 28 17:55:49 crc kubenswrapper[4710]: I1128 17:55:49.219311 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-np6vs_17b2ab0e-183f-433e-a79f-09d25daa2cd5/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:49 crc kubenswrapper[4710]: 
I1128 17:55:49.286235 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_88021b14-adad-452b-af97-74186171d987/ovn-northd/0.log" Nov 28 17:55:49 crc kubenswrapper[4710]: I1128 17:55:49.349800 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_88021b14-adad-452b-af97-74186171d987/openstack-network-exporter/0.log" Nov 28 17:55:49 crc kubenswrapper[4710]: I1128 17:55:49.535895 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_05caeb9e-2c7b-4199-9bb9-3611e4eb3f21/ovsdbserver-nb/0.log" Nov 28 17:55:49 crc kubenswrapper[4710]: I1128 17:55:49.553838 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_05caeb9e-2c7b-4199-9bb9-3611e4eb3f21/openstack-network-exporter/0.log" Nov 28 17:55:49 crc kubenswrapper[4710]: I1128 17:55:49.741071 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4f8f21ee-4b67-4bd1-b46d-46c95015c134/openstack-network-exporter/0.log" Nov 28 17:55:49 crc kubenswrapper[4710]: I1128 17:55:49.757433 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_4f8f21ee-4b67-4bd1-b46d-46c95015c134/ovsdbserver-sb/0.log" Nov 28 17:55:49 crc kubenswrapper[4710]: I1128 17:55:49.867442 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-664bc7f8c8-z9vbx_b4930075-1fb1-4342-af3e-62e0c0f249d1/placement-api/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.052326 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_192d1577-8f40-4d1b-bc83-a7cb9d88e388/setup-container/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.058879 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-664bc7f8c8-z9vbx_b4930075-1fb1-4342-af3e-62e0c0f249d1/placement-log/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.234984 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_192d1577-8f40-4d1b-bc83-a7cb9d88e388/setup-container/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.257819 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_192d1577-8f40-4d1b-bc83-a7cb9d88e388/rabbitmq/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.321911 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a9e35eae-e3e4-43df-83fb-4a2233406e73/setup-container/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.519149 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a9e35eae-e3e4-43df-83fb-4a2233406e73/setup-container/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.527609 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-6vk6n_2a1938e5-0e94-4679-a7d1-d9d9b45681c5/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.610357 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a9e35eae-e3e4-43df-83fb-4a2233406e73/rabbitmq/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.713143 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-h8xmd_ac2e80b6-e6a4-4e45-bc6f-85c2425ff46e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 
28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.810192 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-lft6v_632b6913-e5ef-4e0a-8054-ba62795a3a32/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:50 crc kubenswrapper[4710]: I1128 17:55:50.982043 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-xkv6h_05c25761-79e7-4b39-985a-16705cbb29ae/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.059029 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_13db620f-d83a-4477-b98f-28c38017533c/memcached/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.073365 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-6dkfn_fba211eb-e531-4ebb-941c-5bd4c61b9a3b/ssh-known-hosts-edpm-deployment/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.262445 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6459d5bc5f-vhnpr_56843354-a30a-4997-8f6f-0210e3980dc4/proxy-server/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.270186 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6459d5bc5f-vhnpr_56843354-a30a-4997-8f6f-0210e3980dc4/proxy-httpd/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.296219 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-kmxkk_2b3dc001-22a3-4390-8d90-6769b184d2a0/swift-ring-rebalance/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.498123 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/account-replicator/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.517590 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/account-auditor/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.547595 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/account-reaper/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.566467 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/account-server/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.619302 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/container-auditor/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.703027 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/container-server/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.727389 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/container-updater/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.741232 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/container-replicator/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.752348 4710 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/object-auditor/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.810677 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/object-expirer/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.864647 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/object-replicator/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.893047 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/object-server/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.916585 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/object-updater/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.968525 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/swift-recon-cron/0.log" Nov 28 17:55:51 crc kubenswrapper[4710]: I1128 17:55:51.988408 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_96a67841-bed8-4758-a152-31602db98d49/rsync/0.log" Nov 28 17:55:52 crc kubenswrapper[4710]: I1128 17:55:52.120802 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-jkcm7_6713c8fc-ccd2-4956-8102-4d888af17897/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:55:52 crc kubenswrapper[4710]: I1128 17:55:52.150099 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_e6e3da65-b095-4e28-9fab-5a481096c743/tempest-tests-tempest-tests-runner/0.log" Nov 28 17:55:52 crc kubenswrapper[4710]: I1128 17:55:52.269919 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_218e86f8-62d5-49f2-83fd-9f63432aef22/test-operator-logs-container/0.log" Nov 28 17:55:52 crc kubenswrapper[4710]: I1128 17:55:52.340097 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-9gxkv_599fb57d-7ff9-42b2-bee1-30f542a56d12/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 28 17:56:13 crc kubenswrapper[4710]: I1128 17:56:13.343617 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:56:13 crc kubenswrapper[4710]: I1128 17:56:13.344063 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:56:13 crc kubenswrapper[4710]: I1128 17:56:13.344104 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:56:13 crc kubenswrapper[4710]: I1128 17:56:13.344918 4710 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9faa162fa2d0e90421242c87e8957b2d01034457183612706fea687b37d5e765"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:56:13 crc kubenswrapper[4710]: I1128 17:56:13.344973 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://9faa162fa2d0e90421242c87e8957b2d01034457183612706fea687b37d5e765" gracePeriod=600 Nov 28 17:56:14 crc kubenswrapper[4710]: I1128 17:56:14.219502 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="9faa162fa2d0e90421242c87e8957b2d01034457183612706fea687b37d5e765" exitCode=0 Nov 28 17:56:14 crc kubenswrapper[4710]: I1128 17:56:14.219595 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"9faa162fa2d0e90421242c87e8957b2d01034457183612706fea687b37d5e765"} Nov 28 17:56:14 crc kubenswrapper[4710]: I1128 17:56:14.219880 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db"} Nov 28 17:56:14 crc kubenswrapper[4710]: I1128 17:56:14.219925 4710 scope.go:117] "RemoveContainer" containerID="018bf19fcf866736a5dd9c36bd8ba30de168aa9c9da69e094c36f23d86c9abfe" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.119106 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-s7xmc_98f1d4c3-68b2-42b6-bbfa-e8aaec209764/kube-rbac-proxy/0.log" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.302045 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-s7xmc_98f1d4c3-68b2-42b6-bbfa-e8aaec209764/manager/0.log" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.387064 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7hsvg_a70892da-8396-4018-89e0-f25e7221e674/kube-rbac-proxy/0.log" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.388236 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-7hsvg_a70892da-8396-4018-89e0-f25e7221e674/manager/0.log" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.580670 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-q8vpd_bafb8518-b399-4fe2-9577-8bb606450832/kube-rbac-proxy/0.log" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.618928 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-q8vpd_bafb8518-b399-4fe2-9577-8bb606450832/manager/0.log" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.713877 4710 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6_36ceecc9-0707-4f74-aa62-94ffa7887814/util/0.log" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.879634 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6_36ceecc9-0707-4f74-aa62-94ffa7887814/pull/0.log" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.929953 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6_36ceecc9-0707-4f74-aa62-94ffa7887814/util/0.log" Nov 28 17:56:16 crc kubenswrapper[4710]: I1128 17:56:16.987259 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6_36ceecc9-0707-4f74-aa62-94ffa7887814/pull/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.084378 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6_36ceecc9-0707-4f74-aa62-94ffa7887814/util/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.120117 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6_36ceecc9-0707-4f74-aa62-94ffa7887814/pull/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.135225 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ed5e0be888e23f9579a6bd19880889d1765c2aaaae1249ca08e7d99acb6gxj6_36ceecc9-0707-4f74-aa62-94ffa7887814/extract/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.382157 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-xxmrh_377d6817-3f41-4bba-9078-fa77dcdb9591/manager/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.416706 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-668d9c48b9-xxmrh_377d6817-3f41-4bba-9078-fa77dcdb9591/kube-rbac-proxy/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.472864 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-sbhc4_448f2efe-7d9c-476e-af1c-3ebf62e2b6cb/kube-rbac-proxy/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.587533 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-sbhc4_448f2efe-7d9c-476e-af1c-3ebf62e2b6cb/manager/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.648488 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-2gpds_6ebfa717-92f8-4563-9456-644d1c107d6b/kube-rbac-proxy/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.711376 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-2gpds_6ebfa717-92f8-4563-9456-644d1c107d6b/manager/0.log" Nov 28 17:56:17 crc kubenswrapper[4710]: I1128 17:56:17.882197 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-sns94_baf8a76b-04b8-45d7-83b8-49ab823f2af1/kube-rbac-proxy/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 
17:56:18.051394 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-tkzbw_a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf/kube-rbac-proxy/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.059327 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-sns94_baf8a76b-04b8-45d7-83b8-49ab823f2af1/manager/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.117185 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-tkzbw_a0bfa90b-f373-4b3b-be2e-fb3c7d6d9abf/manager/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.245213 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-6h9mk_81c851e8-e354-40c6-84cf-264f22be561f/kube-rbac-proxy/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.545485 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-546d4bdf48-6h9mk_81c851e8-e354-40c6-84cf-264f22be561f/manager/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.571721 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-bcg9d_a66ff16d-f7e8-42d1-9b40-e992fd3aabb2/kube-rbac-proxy/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.732436 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6546668bfd-bcg9d_a66ff16d-f7e8-42d1-9b40-e992fd3aabb2/manager/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.785773 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-hsntq_92a0ce9b-b234-4954-bf20-890fa1a6785d/manager/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.825487 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-hsntq_92a0ce9b-b234-4954-bf20-890fa1a6785d/kube-rbac-proxy/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.993154 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-wd77l_faacb861-2d5b-4629-8c6b-ae9427266b7b/manager/0.log" Nov 28 17:56:18 crc kubenswrapper[4710]: I1128 17:56:18.995415 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-wd77l_faacb861-2d5b-4629-8c6b-ae9427266b7b/kube-rbac-proxy/0.log" Nov 28 17:56:19 crc kubenswrapper[4710]: I1128 17:56:19.150892 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-867v6_b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c/kube-rbac-proxy/0.log" Nov 28 17:56:19 crc kubenswrapper[4710]: I1128 17:56:19.265000 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-867v6_b6f2f02a-bbb3-40af-ba4c-8aeb7867b54c/manager/0.log" Nov 28 17:56:19 crc kubenswrapper[4710]: I1128 17:56:19.353248 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-bjrnl_5a6d5b4b-1460-41a8-a248-e814e32fb672/kube-rbac-proxy/0.log" Nov 28 17:56:19 crc 
kubenswrapper[4710]: I1128 17:56:19.383102 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-bjrnl_5a6d5b4b-1460-41a8-a248-e814e32fb672/manager/0.log" Nov 28 17:56:19 crc kubenswrapper[4710]: I1128 17:56:19.453948 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx_ee89a2e2-f64c-4310-a271-8d4e7043279a/kube-rbac-proxy/0.log" Nov 28 17:56:19 crc kubenswrapper[4710]: I1128 17:56:19.528320 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-64bc77cfd4j7nwx_ee89a2e2-f64c-4310-a271-8d4e7043279a/manager/0.log" Nov 28 17:56:19 crc kubenswrapper[4710]: I1128 17:56:19.913898 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-dlp9m_102b7cf3-c9f4-47f9-8472-b3659a7c9b4a/registry-server/0.log" Nov 28 17:56:19 crc kubenswrapper[4710]: I1128 17:56:19.934003 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-96cfcb97f-22bhn_01c82b0a-0363-428f-83ad-77949cd978cb/operator/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.129510 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-2c9kf_3c2144e6-7894-4e16-9952-f4a4d848aa55/kube-rbac-proxy/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.173902 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-2c9kf_3c2144e6-7894-4e16-9952-f4a4d848aa55/manager/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.279607 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-45gjt_5755fe75-0e8f-4b17-ab96-1efe5ace8c0f/kube-rbac-proxy/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.420240 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-45gjt_5755fe75-0e8f-4b17-ab96-1efe5ace8c0f/manager/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.465482 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-z7ndb_e557836a-92e3-47e0-8a29-e02ab29a9aea/operator/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.687015 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-hznck_419588b7-987b-44f5-81fd-76451ba0eb2d/manager/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.688695 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-hznck_419588b7-987b-44f5-81fd-76451ba0eb2d/kube-rbac-proxy/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.722708 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-668879d68f-pd88h_61cb335c-2597-42e6-aa4c-410d8881b903/manager/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.775196 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6b5d64d475-6p56z_5c695701-bc1a-4210-87ca-9ee354e664bc/kube-rbac-proxy/0.log" Nov 
28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.958191 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6b5d64d475-6p56z_5c695701-bc1a-4210-87ca-9ee354e664bc/manager/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.967609 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-22spv_b3e15c80-d7b6-4d62-9eff-011dee6d7b6e/kube-rbac-proxy/0.log" Nov 28 17:56:20 crc kubenswrapper[4710]: I1128 17:56:20.977535 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-22spv_b3e15c80-d7b6-4d62-9eff-011dee6d7b6e/manager/0.log" Nov 28 17:56:21 crc kubenswrapper[4710]: I1128 17:56:21.078349 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-rxp9t_e31192ae-8aa1-4376-a40b-4bd8e0e45928/kube-rbac-proxy/0.log" Nov 28 17:56:21 crc kubenswrapper[4710]: I1128 17:56:21.128419 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-rxp9t_e31192ae-8aa1-4376-a40b-4bd8e0e45928/manager/0.log" Nov 28 17:56:41 crc kubenswrapper[4710]: I1128 17:56:41.780058 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-d8tl4_e7dde429-e84e-48dd-a0dc-1bb66d082748/control-plane-machine-set-operator/0.log" Nov 28 17:56:41 crc kubenswrapper[4710]: I1128 17:56:41.941919 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4bldc_8b390d2f-0343-4f77-a3a3-196d446347cb/kube-rbac-proxy/0.log" Nov 28 17:56:41 crc kubenswrapper[4710]: I1128 17:56:41.984842 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4bldc_8b390d2f-0343-4f77-a3a3-196d446347cb/machine-api-operator/0.log" Nov 28 17:56:56 crc kubenswrapper[4710]: I1128 17:56:56.131038 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-ns9h5_4e9ab145-a3a5-49a1-8c9f-b7ee399dddf9/cert-manager-controller/0.log" Nov 28 17:56:56 crc kubenswrapper[4710]: I1128 17:56:56.267301 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-cbsnk_2a677c3f-bd3b-4381-893a-e38debf47432/cert-manager-cainjector/0.log" Nov 28 17:56:56 crc kubenswrapper[4710]: I1128 17:56:56.293299 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-kkqp9_67f3d046-0b7e-4f0f-8d7b-b02acc495a44/cert-manager-webhook/0.log" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.436844 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2kn6p"] Nov 28 17:57:05 crc kubenswrapper[4710]: E1128 17:57:05.437916 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9" containerName="container-00" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.437936 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9" containerName="container-00" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.438164 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6943d2-1c43-4d5b-b1e2-d0aeb1d54eb9" containerName="container-00" Nov 28 17:57:05 crc 
kubenswrapper[4710]: I1128 17:57:05.439807 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.476920 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kn6p"] Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.497428 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-catalog-content\") pod \"redhat-operators-2kn6p\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") " pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.497665 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-utilities\") pod \"redhat-operators-2kn6p\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") " pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.497720 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76828\" (UniqueName: \"kubernetes.io/projected/a1e506a3-be3c-4213-9923-304162c78082-kube-api-access-76828\") pod \"redhat-operators-2kn6p\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") " pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.601712 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-utilities\") pod \"redhat-operators-2kn6p\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") " pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.602206 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76828\" (UniqueName: \"kubernetes.io/projected/a1e506a3-be3c-4213-9923-304162c78082-kube-api-access-76828\") pod \"redhat-operators-2kn6p\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") " pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.602455 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-catalog-content\") pod \"redhat-operators-2kn6p\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") " pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.602802 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-utilities\") pod \"redhat-operators-2kn6p\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") " pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.603895 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-catalog-content\") pod \"redhat-operators-2kn6p\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") " pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 
17:57:05.635824 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76828\" (UniqueName: \"kubernetes.io/projected/a1e506a3-be3c-4213-9923-304162c78082-kube-api-access-76828\") pod \"redhat-operators-2kn6p\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") " pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:05 crc kubenswrapper[4710]: I1128 17:57:05.769725 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kn6p" Nov 28 17:57:06 crc kubenswrapper[4710]: W1128 17:57:06.226398 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1e506a3_be3c_4213_9923_304162c78082.slice/crio-1f06d1bb4f71d1a6696b4d9af84096a3001fc51720e24adbf1b6dc9d906fc158 WatchSource:0}: Error finding container 1f06d1bb4f71d1a6696b4d9af84096a3001fc51720e24adbf1b6dc9d906fc158: Status 404 returned error can't find the container with id 1f06d1bb4f71d1a6696b4d9af84096a3001fc51720e24adbf1b6dc9d906fc158 Nov 28 17:57:06 crc kubenswrapper[4710]: I1128 17:57:06.241470 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kn6p"] Nov 28 17:57:06 crc kubenswrapper[4710]: I1128 17:57:06.780869 4710 generic.go:334] "Generic (PLEG): container finished" podID="a1e506a3-be3c-4213-9923-304162c78082" containerID="ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10" exitCode=0 Nov 28 17:57:06 crc kubenswrapper[4710]: I1128 17:57:06.781128 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kn6p" event={"ID":"a1e506a3-be3c-4213-9923-304162c78082","Type":"ContainerDied","Data":"ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10"} Nov 28 17:57:06 crc kubenswrapper[4710]: I1128 17:57:06.781153 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kn6p" event={"ID":"a1e506a3-be3c-4213-9923-304162c78082","Type":"ContainerStarted","Data":"1f06d1bb4f71d1a6696b4d9af84096a3001fc51720e24adbf1b6dc9d906fc158"} Nov 28 17:57:08 crc kubenswrapper[4710]: I1128 17:57:08.803247 4710 generic.go:334] "Generic (PLEG): container finished" podID="a1e506a3-be3c-4213-9923-304162c78082" containerID="6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9" exitCode=0 Nov 28 17:57:08 crc kubenswrapper[4710]: I1128 17:57:08.803320 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kn6p" event={"ID":"a1e506a3-be3c-4213-9923-304162c78082","Type":"ContainerDied","Data":"6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9"} Nov 28 17:57:09 crc kubenswrapper[4710]: I1128 17:57:09.816908 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kn6p" event={"ID":"a1e506a3-be3c-4213-9923-304162c78082","Type":"ContainerStarted","Data":"67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9"} Nov 28 17:57:09 crc kubenswrapper[4710]: I1128 17:57:09.840163 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2kn6p" podStartSLOduration=2.308955702 podStartE2EDuration="4.840146603s" podCreationTimestamp="2025-11-28 17:57:05 +0000 UTC" firstStartedPulling="2025-11-28 17:57:06.783122831 +0000 UTC m=+3516.041422876" lastFinishedPulling="2025-11-28 17:57:09.314313732 +0000 UTC m=+3518.572613777" observedRunningTime="2025-11-28 17:57:09.832581003 
+0000 UTC m=+3519.090881048" watchObservedRunningTime="2025-11-28 17:57:09.840146603 +0000 UTC m=+3519.098446648"
Nov 28 17:57:11 crc kubenswrapper[4710]: I1128 17:57:11.667749 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-tmz97_af5831ae-b1bc-4a39-b1bb-6e3c8fb27e0e/nmstate-console-plugin/0.log"
Nov 28 17:57:11 crc kubenswrapper[4710]: I1128 17:57:11.851459 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kjwqj_6e3b7f00-c71e-4a41-82db-9b1910f3233d/nmstate-handler/0.log"
Nov 28 17:57:11 crc kubenswrapper[4710]: I1128 17:57:11.869022 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-7gkl2_7a7eea14-e168-46b6-a7e8-2d910b465c4c/kube-rbac-proxy/0.log"
Nov 28 17:57:11 crc kubenswrapper[4710]: I1128 17:57:11.891847 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-7gkl2_7a7eea14-e168-46b6-a7e8-2d910b465c4c/nmstate-metrics/0.log"
Nov 28 17:57:12 crc kubenswrapper[4710]: I1128 17:57:12.148142 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-p7629_18adf227-ae9c-403d-8fe0-107fdf1c2e76/nmstate-operator/0.log"
Nov 28 17:57:12 crc kubenswrapper[4710]: I1128 17:57:12.167180 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-6l6rt_878067d5-b960-4b2e-915c-89c96da9bbc8/nmstate-webhook/0.log"
Nov 28 17:57:14 crc kubenswrapper[4710]: I1128 17:57:14.889101 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lth46"]
Nov 28 17:57:14 crc kubenswrapper[4710]: I1128 17:57:14.891912 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:14 crc kubenswrapper[4710]: I1128 17:57:14.898197 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lth46"]
Nov 28 17:57:14 crc kubenswrapper[4710]: I1128 17:57:14.945845 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2cnp\" (UniqueName: \"kubernetes.io/projected/0eee47f6-82ff-4dcd-b69f-1007e97d651d-kube-api-access-q2cnp\") pod \"community-operators-lth46\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") " pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:14 crc kubenswrapper[4710]: I1128 17:57:14.945904 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-catalog-content\") pod \"community-operators-lth46\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") " pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:14 crc kubenswrapper[4710]: I1128 17:57:14.946203 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-utilities\") pod \"community-operators-lth46\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") " pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:15 crc kubenswrapper[4710]: I1128 17:57:15.047817 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2cnp\" (UniqueName: \"kubernetes.io/projected/0eee47f6-82ff-4dcd-b69f-1007e97d651d-kube-api-access-q2cnp\") pod \"community-operators-lth46\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") " pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:15 crc kubenswrapper[4710]: I1128 17:57:15.047877 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-catalog-content\") pod \"community-operators-lth46\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") " pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:15 crc kubenswrapper[4710]: I1128 17:57:15.047976 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-utilities\") pod \"community-operators-lth46\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") " pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:15 crc kubenswrapper[4710]: I1128 17:57:15.048456 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-catalog-content\") pod \"community-operators-lth46\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") " pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:15 crc kubenswrapper[4710]: I1128 17:57:15.048497 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-utilities\") pod \"community-operators-lth46\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") " pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:15 crc kubenswrapper[4710]: I1128 17:57:15.086796 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2cnp\" (UniqueName: \"kubernetes.io/projected/0eee47f6-82ff-4dcd-b69f-1007e97d651d-kube-api-access-q2cnp\") pod \"community-operators-lth46\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") " pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:15 crc kubenswrapper[4710]: I1128 17:57:15.219861 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:15 crc kubenswrapper[4710]: I1128 17:57:15.770304 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2kn6p"
Nov 28 17:57:15 crc kubenswrapper[4710]: I1128 17:57:15.770792 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2kn6p"
Nov 28 17:57:16 crc kubenswrapper[4710]: I1128 17:57:16.099302 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lth46"]
Nov 28 17:57:16 crc kubenswrapper[4710]: I1128 17:57:16.845694 4710 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kn6p" podUID="a1e506a3-be3c-4213-9923-304162c78082" containerName="registry-server" probeResult="failure" output=<
Nov 28 17:57:16 crc kubenswrapper[4710]: timeout: failed to connect service ":50051" within 1s
Nov 28 17:57:16 crc kubenswrapper[4710]: >
Nov 28 17:57:17 crc kubenswrapper[4710]: I1128 17:57:17.075486 4710 generic.go:334] "Generic (PLEG): container finished" podID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerID="94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2" exitCode=0
Nov 28 17:57:17 crc kubenswrapper[4710]: I1128 17:57:17.075560 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lth46" event={"ID":"0eee47f6-82ff-4dcd-b69f-1007e97d651d","Type":"ContainerDied","Data":"94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2"}
Nov 28 17:57:17 crc kubenswrapper[4710]: I1128 17:57:17.075604 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lth46" event={"ID":"0eee47f6-82ff-4dcd-b69f-1007e97d651d","Type":"ContainerStarted","Data":"a270e7db4016ed737fce0d35d8276eee999217ad55decb654d7f7e2d76013943"}
Nov 28 17:57:17 crc kubenswrapper[4710]: I1128 17:57:17.078967 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 28 17:57:18 crc kubenswrapper[4710]: I1128 17:57:18.089087 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lth46" event={"ID":"0eee47f6-82ff-4dcd-b69f-1007e97d651d","Type":"ContainerStarted","Data":"9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a"}
Nov 28 17:57:19 crc kubenswrapper[4710]: I1128 17:57:19.102533 4710 generic.go:334] "Generic (PLEG): container finished" podID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerID="9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a" exitCode=0
Nov 28 17:57:19 crc kubenswrapper[4710]: I1128 17:57:19.102679 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lth46" event={"ID":"0eee47f6-82ff-4dcd-b69f-1007e97d651d","Type":"ContainerDied","Data":"9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a"}
Nov 28 17:57:20 crc kubenswrapper[4710]: I1128 17:57:20.119571 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lth46" event={"ID":"0eee47f6-82ff-4dcd-b69f-1007e97d651d","Type":"ContainerStarted","Data":"cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31"}
Nov 28 17:57:20 crc kubenswrapper[4710]: I1128 17:57:20.170994 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lth46" podStartSLOduration=3.457771657 podStartE2EDuration="6.170960988s" podCreationTimestamp="2025-11-28 17:57:14 +0000 UTC" firstStartedPulling="2025-11-28 17:57:17.078498262 +0000 UTC m=+3526.336798317" lastFinishedPulling="2025-11-28 17:57:19.791687603 +0000 UTC m=+3529.049987648" observedRunningTime="2025-11-28 17:57:20.150781578 +0000 UTC m=+3529.409081663" watchObservedRunningTime="2025-11-28 17:57:20.170960988 +0000 UTC m=+3529.429261063"
Nov 28 17:57:25 crc kubenswrapper[4710]: I1128 17:57:25.220344 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:25 crc kubenswrapper[4710]: I1128 17:57:25.221034 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:25 crc kubenswrapper[4710]: I1128 17:57:25.289876 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:25 crc kubenswrapper[4710]: I1128 17:57:25.846311 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2kn6p"
Nov 28 17:57:25 crc kubenswrapper[4710]: I1128 17:57:25.914440 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2kn6p"
Nov 28 17:57:26 crc kubenswrapper[4710]: I1128 17:57:26.268927 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:26 crc kubenswrapper[4710]: I1128 17:57:26.733995 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kn6p"]
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.208486 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2kn6p" podUID="a1e506a3-be3c-4213-9923-304162c78082" containerName="registry-server" containerID="cri-o://67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9" gracePeriod=2
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.695173 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kn6p"
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.770883 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-catalog-content\") pod \"a1e506a3-be3c-4213-9923-304162c78082\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") "
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.770957 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-utilities\") pod \"a1e506a3-be3c-4213-9923-304162c78082\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") "
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.770995 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76828\" (UniqueName: \"kubernetes.io/projected/a1e506a3-be3c-4213-9923-304162c78082-kube-api-access-76828\") pod \"a1e506a3-be3c-4213-9923-304162c78082\" (UID: \"a1e506a3-be3c-4213-9923-304162c78082\") "
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.771536 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-utilities" (OuterVolumeSpecName: "utilities") pod "a1e506a3-be3c-4213-9923-304162c78082" (UID: "a1e506a3-be3c-4213-9923-304162c78082"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.776749 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e506a3-be3c-4213-9923-304162c78082-kube-api-access-76828" (OuterVolumeSpecName: "kube-api-access-76828") pod "a1e506a3-be3c-4213-9923-304162c78082" (UID: "a1e506a3-be3c-4213-9923-304162c78082"). InnerVolumeSpecName "kube-api-access-76828". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.872968 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.873001 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76828\" (UniqueName: \"kubernetes.io/projected/a1e506a3-be3c-4213-9923-304162c78082-kube-api-access-76828\") on node \"crc\" DevicePath \"\""
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.882111 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1e506a3-be3c-4213-9923-304162c78082" (UID: "a1e506a3-be3c-4213-9923-304162c78082"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:57:27 crc kubenswrapper[4710]: I1128 17:57:27.974896 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1e506a3-be3c-4213-9923-304162c78082-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.220271 4710 generic.go:334] "Generic (PLEG): container finished" podID="a1e506a3-be3c-4213-9923-304162c78082" containerID="67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9" exitCode=0
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.220340 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kn6p"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.220326 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kn6p" event={"ID":"a1e506a3-be3c-4213-9923-304162c78082","Type":"ContainerDied","Data":"67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9"}
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.220809 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kn6p" event={"ID":"a1e506a3-be3c-4213-9923-304162c78082","Type":"ContainerDied","Data":"1f06d1bb4f71d1a6696b4d9af84096a3001fc51720e24adbf1b6dc9d906fc158"}
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.220894 4710 scope.go:117] "RemoveContainer" containerID="67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.250376 4710 scope.go:117] "RemoveContainer" containerID="6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.261951 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kn6p"]
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.274861 4710 scope.go:117] "RemoveContainer" containerID="ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.279284 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2kn6p"]
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.339988 4710 scope.go:117] "RemoveContainer" containerID="67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9"
Nov 28 17:57:28 crc kubenswrapper[4710]: E1128 17:57:28.340669 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9\": container with ID starting with 67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9 not found: ID does not exist" containerID="67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.340707 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9"} err="failed to get container status \"67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9\": rpc error: code = NotFound desc = could not find container \"67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9\": container with ID starting with 67e26f25fa65cc959ad724a4c0174c197c4265ce58a38a74929c397f8d41e6e9 not found: ID does not exist"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.340728 4710 scope.go:117] "RemoveContainer" containerID="6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9"
Nov 28 17:57:28 crc kubenswrapper[4710]: E1128 17:57:28.342782 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9\": container with ID starting with 6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9 not found: ID does not exist" containerID="6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.342825 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9"} err="failed to get container status \"6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9\": rpc error: code = NotFound desc = could not find container \"6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9\": container with ID starting with 6e34d1c8034e83f32dc7f6416f7d0008d2c37fa164871d247bd894cba77845e9 not found: ID does not exist"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.342855 4710 scope.go:117] "RemoveContainer" containerID="ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10"
Nov 28 17:57:28 crc kubenswrapper[4710]: E1128 17:57:28.343133 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10\": container with ID starting with ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10 not found: ID does not exist" containerID="ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.343161 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10"} err="failed to get container status \"ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10\": rpc error: code = NotFound desc = could not find container \"ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10\": container with ID starting with ca4375a2a34928c79089fc34c0e8a297532d704c1a13584fa0fabea9e2703c10 not found: ID does not exist"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.530623 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lth46"]
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.531709 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lth46" podUID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerName="registry-server" containerID="cri-o://cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31" gracePeriod=2
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.587532 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-867dcf9474-l79hr_13835a45-f211-4e69-bccd-98ef4e8a5594/manager/0.log"
Nov 28 17:57:28 crc kubenswrapper[4710]: I1128 17:57:28.602226 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-867dcf9474-l79hr_13835a45-f211-4e69-bccd-98ef4e8a5594/kube-rbac-proxy/0.log"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.017974 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.095349 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-utilities\") pod \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") "
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.095514 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2cnp\" (UniqueName: \"kubernetes.io/projected/0eee47f6-82ff-4dcd-b69f-1007e97d651d-kube-api-access-q2cnp\") pod \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") "
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.095594 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-catalog-content\") pod \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\" (UID: \"0eee47f6-82ff-4dcd-b69f-1007e97d651d\") "
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.096493 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-utilities" (OuterVolumeSpecName: "utilities") pod "0eee47f6-82ff-4dcd-b69f-1007e97d651d" (UID: "0eee47f6-82ff-4dcd-b69f-1007e97d651d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.103986 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eee47f6-82ff-4dcd-b69f-1007e97d651d-kube-api-access-q2cnp" (OuterVolumeSpecName: "kube-api-access-q2cnp") pod "0eee47f6-82ff-4dcd-b69f-1007e97d651d" (UID: "0eee47f6-82ff-4dcd-b69f-1007e97d651d"). InnerVolumeSpecName "kube-api-access-q2cnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.148709 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0eee47f6-82ff-4dcd-b69f-1007e97d651d" (UID: "0eee47f6-82ff-4dcd-b69f-1007e97d651d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.158508 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e506a3-be3c-4213-9923-304162c78082" path="/var/lib/kubelet/pods/a1e506a3-be3c-4213-9923-304162c78082/volumes"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.198601 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.198636 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0eee47f6-82ff-4dcd-b69f-1007e97d651d-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.198646 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2cnp\" (UniqueName: \"kubernetes.io/projected/0eee47f6-82ff-4dcd-b69f-1007e97d651d-kube-api-access-q2cnp\") on node \"crc\" DevicePath \"\""
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.233647 4710 generic.go:334] "Generic (PLEG): container finished" podID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerID="cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31" exitCode=0
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.233717 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lth46" event={"ID":"0eee47f6-82ff-4dcd-b69f-1007e97d651d","Type":"ContainerDied","Data":"cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31"}
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.233749 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lth46" event={"ID":"0eee47f6-82ff-4dcd-b69f-1007e97d651d","Type":"ContainerDied","Data":"a270e7db4016ed737fce0d35d8276eee999217ad55decb654d7f7e2d76013943"}
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.233808 4710 scope.go:117] "RemoveContainer" containerID="cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.233933 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lth46"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.267543 4710 scope.go:117] "RemoveContainer" containerID="9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.283584 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lth46"]
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.294426 4710 scope.go:117] "RemoveContainer" containerID="94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.312195 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lth46"]
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.316440 4710 scope.go:117] "RemoveContainer" containerID="cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31"
Nov 28 17:57:29 crc kubenswrapper[4710]: E1128 17:57:29.317033 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31\": container with ID starting with cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31 not found: ID does not exist" containerID="cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.317066 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31"} err="failed to get container status \"cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31\": rpc error: code = NotFound desc = could not find container \"cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31\": container with ID starting with cfbda35f81d770a435347359497faec7a6c9f1bb4ac03cc43ce632ae3698ff31 not found: ID does not exist"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.317088 4710 scope.go:117] "RemoveContainer" containerID="9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a"
Nov 28 17:57:29 crc kubenswrapper[4710]: E1128 17:57:29.317364 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a\": container with ID starting with 9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a not found: ID does not exist" containerID="9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.317383 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a"} err="failed to get container status \"9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a\": rpc error: code = NotFound desc = could not find container \"9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a\": container with ID starting with 9de54e1ca0e27e7d8105aa0c05bf87c9dab872f9f5d3e7a2c3d7e8cf7d5db04a not found: ID does not exist"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.317397 4710 scope.go:117] "RemoveContainer" containerID="94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2"
Nov 28 17:57:29 crc kubenswrapper[4710]: E1128 17:57:29.317869 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2\": container with ID starting with 94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2 not found: ID does not exist" containerID="94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2"
Nov 28 17:57:29 crc kubenswrapper[4710]: I1128 17:57:29.317920 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2"} err="failed to get container status \"94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2\": rpc error: code = NotFound desc = could not find container \"94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2\": container with ID starting with 94ee0d6374c730814610113f2cdb179b00db97e9f992b8c739697fa9e25552b2 not found: ID does not exist"
Nov 28 17:57:31 crc kubenswrapper[4710]: I1128 17:57:31.159094 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" path="/var/lib/kubelet/pods/0eee47f6-82ff-4dcd-b69f-1007e97d651d/volumes"
Nov 28 17:57:43 crc kubenswrapper[4710]: I1128 17:57:43.635820 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-ff9846bd-rrn26_a83b9835-d280-4376-9a2d-b75efd5516d1/cluster-logging-operator/0.log"
Nov 28 17:57:43 crc kubenswrapper[4710]: I1128 17:57:43.829079 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-rnjct_72491cd2-2224-4420-a937-a15f5f22e035/collector/0.log"
Nov 28 17:57:43 crc kubenswrapper[4710]: I1128 17:57:43.855542 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_1c46c2c6-fb09-4289-a38d-ce46f239b830/loki-compactor/0.log"
Nov 28 17:57:43 crc kubenswrapper[4710]: I1128 17:57:43.995967 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-76cc67bf56-2nm9w_68c1e53e-646a-4985-b4a8-d61a238cbad2/loki-distributor/0.log"
Nov 28 17:57:44 crc kubenswrapper[4710]: I1128 17:57:44.031866 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-bb554467b-6hp6p_6cab4590-1fa6-4fe0-ae00-2c70b93830bd/gateway/0.log"
Nov 28 17:57:44 crc kubenswrapper[4710]: I1128 17:57:44.064334 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-bb554467b-6hp6p_6cab4590-1fa6-4fe0-ae00-2c70b93830bd/opa/0.log"
Nov 28 17:57:44 crc kubenswrapper[4710]: I1128 17:57:44.193941 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-bb554467b-j7bcn_f5157b75-08ae-416f-a4d7-1e1f7cb085c4/gateway/0.log"
Nov 28 17:57:44 crc kubenswrapper[4710]: I1128 17:57:44.268499 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-bb554467b-j7bcn_f5157b75-08ae-416f-a4d7-1e1f7cb085c4/opa/0.log"
Nov 28 17:57:44 crc kubenswrapper[4710]: I1128 17:57:44.378234 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_0d4ff66e-4d49-4dc9-9ef9-ae4701c5ff2d/loki-index-gateway/0.log"
Nov 28 17:57:44 crc kubenswrapper[4710]: I1128 17:57:44.494547 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_0b308845-0e6e-41e0-9ca9-f04b09a31211/loki-ingester/0.log"
Nov 28 17:57:44 crc kubenswrapper[4710]: I1128 17:57:44.567048 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-5895d59bb8-h8dlt_57ccef3e-3095-486c-a76f-733a130bf17d/loki-querier/0.log"
Nov 28 17:57:44 crc kubenswrapper[4710]: I1128 17:57:44.696464 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-84558f7c9f-vrpfr_56b6c331-58e9-4845-ba94-c16852ca78aa/loki-query-frontend/0.log"
Nov 28 17:57:59 crc kubenswrapper[4710]: I1128 17:57:59.708114 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jxlkv_c96510f3-24a2-4722-83c2-a1d39168687b/kube-rbac-proxy/0.log"
Nov 28 17:57:59 crc kubenswrapper[4710]: I1128 17:57:59.816523 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-jxlkv_c96510f3-24a2-4722-83c2-a1d39168687b/controller/0.log"
Nov 28 17:57:59 crc kubenswrapper[4710]: I1128 17:57:59.995840 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-frr-files/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.171791 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-frr-files/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.173483 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-reloader/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.218747 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-metrics/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.242257 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-reloader/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.376451 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-frr-files/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.392629 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-reloader/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.408962 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-metrics/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.467067 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-metrics/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.630835 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-reloader/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.636950 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-frr-files/0.log"
Nov 28 17:58:00 crc kubenswrapper[4710]: I1128 17:58:00.646291 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/cp-metrics/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:00.705130 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/controller/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:00.830128 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/frr-metrics/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:00.871090 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/kube-rbac-proxy-frr/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:00.883601 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/kube-rbac-proxy/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:01.026048 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/reloader/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:01.302787 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-pj7zp_499217b3-5eff-47d2-ba82-b340a1fa5149/frr-k8s-webhook-server/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:01.522641 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76d88688fb-s4d79_a01a3bc0-24e8-423f-87c8-32a5cca2be0a/manager/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:01.645255 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6ff9c476c7-v8zvk_0c20cdd4-c8d5-4bfc-ba23-8cc4b544b27e/webhook-server/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:01.817502 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kqv5c_b1655d12-6e92-47ad-b93b-f664ec03d1d0/kube-rbac-proxy/0.log"
Nov 28 17:58:01 crc kubenswrapper[4710]: I1128 17:58:01.955952 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-7t69j_05aaf633-4b72-414c-bed7-072766131fb5/frr/0.log"
Nov 28 17:58:02 crc kubenswrapper[4710]: I1128 17:58:02.248693 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kqv5c_b1655d12-6e92-47ad-b93b-f664ec03d1d0/speaker/0.log"
Nov 28 17:58:13 crc kubenswrapper[4710]: I1128 17:58:13.344406 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 28 17:58:13 crc kubenswrapper[4710]: I1128 17:58:13.344988 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 28 17:58:16 crc kubenswrapper[4710]: I1128 17:58:16.798087 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda/util/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.002141 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda/util/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.017639 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda/pull/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.017954 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda/pull/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.220354 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda/extract/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.255506 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda/pull/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.275562 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4529ed37fc81381df2b45ea09e6f1b4af8d1558d603912431befd8aeb869mhs_dc4287a5-9f7c-4c3e-b084-f45fd0d4ddda/util/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.435653 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2_776c25fb-769e-45f1-bbdd-1ef457e29908/util/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.614923 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2_776c25fb-769e-45f1-bbdd-1ef457e29908/pull/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.645254 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2_776c25fb-769e-45f1-bbdd-1ef457e29908/util/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.650918 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2_776c25fb-769e-45f1-bbdd-1ef457e29908/pull/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.794719 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2_776c25fb-769e-45f1-bbdd-1ef457e29908/pull/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.801893 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2_776c25fb-769e-45f1-bbdd-1ef457e29908/util/0.log"
Nov 28 17:58:17 crc kubenswrapper[4710]: I1128 17:58:17.848162 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fnl8r2_776c25fb-769e-45f1-bbdd-1ef457e29908/extract/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.003191 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_fbd014d4-ebd1-4399-8fe0-82dea587a945/util/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.221594 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_fbd014d4-ebd1-4399-8fe0-82dea587a945/pull/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.221874 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_fbd014d4-ebd1-4399-8fe0-82dea587a945/pull/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.241182 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_fbd014d4-ebd1-4399-8fe0-82dea587a945/util/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.353513 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_fbd014d4-ebd1-4399-8fe0-82dea587a945/util/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.397989 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_fbd014d4-ebd1-4399-8fe0-82dea587a945/extract/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.406902 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a8a03f72555e3294619fd3c0a789fa82d1f6921a8cf9935ed9b211463f96vfj_fbd014d4-ebd1-4399-8fe0-82dea587a945/pull/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.548373 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2_39e81a6e-82aa-4fbe-9e06-4854b233df2e/util/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.712526 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2_39e81a6e-82aa-4fbe-9e06-4854b233df2e/util/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.738515 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2_39e81a6e-82aa-4fbe-9e06-4854b233df2e/pull/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.743036 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2_39e81a6e-82aa-4fbe-9e06-4854b233df2e/pull/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.952179 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2_39e81a6e-82aa-4fbe-9e06-4854b233df2e/pull/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.961008 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2_39e81a6e-82aa-4fbe-9e06-4854b233df2e/util/0.log"
Nov 28 17:58:18 crc kubenswrapper[4710]: I1128 17:58:18.974413 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83qltw2_39e81a6e-82aa-4fbe-9e06-4854b233df2e/extract/0.log"
Nov 28 17:58:19 crc kubenswrapper[4710]: I1128 17:58:19.125146 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sknw8_852a614a-5a2b-4e2b-8946-13ad235093fc/extract-utilities/0.log"
Nov 28 17:58:19 crc kubenswrapper[4710]: I1128 17:58:19.366361 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sknw8_852a614a-5a2b-4e2b-8946-13ad235093fc/extract-content/0.log"
Nov 28 17:58:19 crc kubenswrapper[4710]: I1128 17:58:19.371980 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sknw8_852a614a-5a2b-4e2b-8946-13ad235093fc/extract-utilities/0.log"
Nov 28 17:58:19 crc kubenswrapper[4710]: I1128 17:58:19.394048 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sknw8_852a614a-5a2b-4e2b-8946-13ad235093fc/extract-content/0.log"
Nov 28 17:58:19 crc kubenswrapper[4710]: I1128 17:58:19.713099 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sknw8_852a614a-5a2b-4e2b-8946-13ad235093fc/extract-utilities/0.log"
Nov 28 17:58:19 crc kubenswrapper[4710]: I1128 17:58:19.765629 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sknw8_852a614a-5a2b-4e2b-8946-13ad235093fc/extract-content/0.log"
Nov 28 17:58:19 crc kubenswrapper[4710]: I1128 17:58:19.916390 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2cbz_bd5feca2-f8e0-42d6-b11b-38a186ed4044/extract-utilities/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.080237 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-sknw8_852a614a-5a2b-4e2b-8946-13ad235093fc/registry-server/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.110688 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2cbz_bd5feca2-f8e0-42d6-b11b-38a186ed4044/extract-content/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.136956 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2cbz_bd5feca2-f8e0-42d6-b11b-38a186ed4044/extract-utilities/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.155959 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2cbz_bd5feca2-f8e0-42d6-b11b-38a186ed4044/extract-content/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.315255 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2cbz_bd5feca2-f8e0-42d6-b11b-38a186ed4044/extract-utilities/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.336627 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2cbz_bd5feca2-f8e0-42d6-b11b-38a186ed4044/extract-content/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.357427 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-4w9jc_b297151b-94bd-4ed5-b889-511fc92fa343/marketplace-operator/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.548182 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5l7l6_41bc6f92-7755-4ed7-94ab-b21b82284a9f/extract-utilities/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.670725 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-x2cbz_bd5feca2-f8e0-42d6-b11b-38a186ed4044/registry-server/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.752750 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5l7l6_41bc6f92-7755-4ed7-94ab-b21b82284a9f/extract-content/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.801465 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5l7l6_41bc6f92-7755-4ed7-94ab-b21b82284a9f/extract-utilities/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.804533 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5l7l6_41bc6f92-7755-4ed7-94ab-b21b82284a9f/extract-content/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.946564 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5l7l6_41bc6f92-7755-4ed7-94ab-b21b82284a9f/extract-content/0.log"
Nov 28 17:58:20 crc kubenswrapper[4710]: I1128 17:58:20.975404 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5l7l6_41bc6f92-7755-4ed7-94ab-b21b82284a9f/extract-utilities/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.031727 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d9z7q_aeef35e5-e5cb-4fb3-af00-f5adca01d8e6/extract-utilities/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.100696 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5l7l6_41bc6f92-7755-4ed7-94ab-b21b82284a9f/registry-server/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.209467 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d9z7q_aeef35e5-e5cb-4fb3-af00-f5adca01d8e6/extract-content/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.222220 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d9z7q_aeef35e5-e5cb-4fb3-af00-f5adca01d8e6/extract-utilities/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.228547 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d9z7q_aeef35e5-e5cb-4fb3-af00-f5adca01d8e6/extract-content/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.388392 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d9z7q_aeef35e5-e5cb-4fb3-af00-f5adca01d8e6/extract-content/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.489127 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d9z7q_aeef35e5-e5cb-4fb3-af00-f5adca01d8e6/registry-server/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.529097 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dq7q7_6fbcd726-3ba8-41eb-9b6c-9648483ec935/extract-utilities/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.529115 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-d9z7q_aeef35e5-e5cb-4fb3-af00-f5adca01d8e6/extract-utilities/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.686156 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dq7q7_6fbcd726-3ba8-41eb-9b6c-9648483ec935/extract-utilities/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.704012 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dq7q7_6fbcd726-3ba8-41eb-9b6c-9648483ec935/extract-content/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.709427 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dq7q7_6fbcd726-3ba8-41eb-9b6c-9648483ec935/extract-content/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.876528 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dq7q7_6fbcd726-3ba8-41eb-9b6c-9648483ec935/extract-content/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.881291 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dq7q7_6fbcd726-3ba8-41eb-9b6c-9648483ec935/extract-utilities/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.929728 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gj4kl_0b982e59-f24b-48b7-b0cf-cd196c35c646/extract-utilities/0.log"
Nov 28 17:58:21 crc kubenswrapper[4710]: I1128 17:58:21.982406 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dq7q7_6fbcd726-3ba8-41eb-9b6c-9648483ec935/registry-server/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.131074 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gj4kl_0b982e59-f24b-48b7-b0cf-cd196c35c646/extract-utilities/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.171284 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gj4kl_0b982e59-f24b-48b7-b0cf-cd196c35c646/extract-content/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.198589 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gj4kl_0b982e59-f24b-48b7-b0cf-cd196c35c646/extract-content/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.381515 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gj4kl_0b982e59-f24b-48b7-b0cf-cd196c35c646/extract-utilities/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.384143 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gj4kl_0b982e59-f24b-48b7-b0cf-cd196c35c646/extract-content/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.448486 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gmx26_1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec/extract-utilities/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.472718 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gj4kl_0b982e59-f24b-48b7-b0cf-cd196c35c646/registry-server/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.625902 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gmx26_1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec/extract-utilities/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.644651 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gmx26_1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec/extract-content/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.647820 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gmx26_1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec/extract-content/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.804141 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gmx26_1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec/extract-utilities/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.807531 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gmx26_1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec/extract-content/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.834604 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k8rlt_31d7e40a-6e97-4337-be6d-4f93a852e342/extract-utilities/0.log"
Nov 28 17:58:22 crc kubenswrapper[4710]: I1128 17:58:22.901094 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-gmx26_1acb5cf0-776d-4d4c-a4d8-fa4adc9196ec/registry-server/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.094719 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k8rlt_31d7e40a-6e97-4337-be6d-4f93a852e342/extract-content/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.095804 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k8rlt_31d7e40a-6e97-4337-be6d-4f93a852e342/extract-utilities/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.279986 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k8rlt_31d7e40a-6e97-4337-be6d-4f93a852e342/extract-content/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.465048 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-khfkl_95fd8509-97bd-4d02-87d5-3593b426ef44/extract-utilities/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.483076 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k8rlt_31d7e40a-6e97-4337-be6d-4f93a852e342/registry-server/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.493581 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k8rlt_31d7e40a-6e97-4337-be6d-4f93a852e342/extract-utilities/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.508694 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-k8rlt_31d7e40a-6e97-4337-be6d-4f93a852e342/extract-content/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.656515 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-khfkl_95fd8509-97bd-4d02-87d5-3593b426ef44/extract-utilities/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.699175 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-khfkl_95fd8509-97bd-4d02-87d5-3593b426ef44/extract-content/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.699562 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-khfkl_95fd8509-97bd-4d02-87d5-3593b426ef44/extract-content/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.889566 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-khfkl_95fd8509-97bd-4d02-87d5-3593b426ef44/extract-content/0.log"
Nov 28 17:58:23 crc kubenswrapper[4710]: I1128 17:58:23.940005 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-khfkl_95fd8509-97bd-4d02-87d5-3593b426ef44/extract-utilities/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.013007 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-khfkl_95fd8509-97bd-4d02-87d5-3593b426ef44/registry-server/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.037394 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m5srj_32d30eea-067e-4b8c-8bd4-a6dd02440a71/extract-utilities/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.121503 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m5srj_32d30eea-067e-4b8c-8bd4-a6dd02440a71/extract-content/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.143967 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m5srj_32d30eea-067e-4b8c-8bd4-a6dd02440a71/extract-utilities/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.203086 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m5srj_32d30eea-067e-4b8c-8bd4-a6dd02440a71/extract-content/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.381826 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m5srj_32d30eea-067e-4b8c-8bd4-a6dd02440a71/extract-utilities/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.436956 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m5srj_32d30eea-067e-4b8c-8bd4-a6dd02440a71/extract-content/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.491957 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npsxw_4de54a7a-65ab-4560-a62e-0fb531a0ca92/extract-utilities/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.519324 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-m5srj_32d30eea-067e-4b8c-8bd4-a6dd02440a71/registry-server/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.634312 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npsxw_4de54a7a-65ab-4560-a62e-0fb531a0ca92/extract-utilities/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.658688 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npsxw_4de54a7a-65ab-4560-a62e-0fb531a0ca92/extract-content/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.662443 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npsxw_4de54a7a-65ab-4560-a62e-0fb531a0ca92/extract-content/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.848383 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npsxw_4de54a7a-65ab-4560-a62e-0fb531a0ca92/extract-content/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.870893 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npsxw_4de54a7a-65ab-4560-a62e-0fb531a0ca92/extract-utilities/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.879377 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pllxk_dea56f67-6506-451e-965f-3ef66a34d8e7/extract-utilities/0.log"
Nov 28 17:58:24 crc kubenswrapper[4710]: I1128 17:58:24.929497 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-npsxw_4de54a7a-65ab-4560-a62e-0fb531a0ca92/registry-server/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.068819 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pllxk_dea56f67-6506-451e-965f-3ef66a34d8e7/extract-utilities/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.104161 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pllxk_dea56f67-6506-451e-965f-3ef66a34d8e7/extract-content/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.116431 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pllxk_dea56f67-6506-451e-965f-3ef66a34d8e7/extract-content/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.244985 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pllxk_dea56f67-6506-451e-965f-3ef66a34d8e7/extract-utilities/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.303439 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pllxk_dea56f67-6506-451e-965f-3ef66a34d8e7/extract-content/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.328494 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sl4b6_78802154-2da1-4554-92d3-20994dfac727/extract-utilities/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.407708 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pllxk_dea56f67-6506-451e-965f-3ef66a34d8e7/registry-server/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.482445 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sl4b6_78802154-2da1-4554-92d3-20994dfac727/extract-utilities/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.498737 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sl4b6_78802154-2da1-4554-92d3-20994dfac727/extract-content/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.513315 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sl4b6_78802154-2da1-4554-92d3-20994dfac727/extract-content/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.728157 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sl4b6_78802154-2da1-4554-92d3-20994dfac727/extract-utilities/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.743372 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sl4b6_78802154-2da1-4554-92d3-20994dfac727/extract-content/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.839353 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tdbpv_aff5b3d8-f488-487f-9407-07c88e139d95/extract-utilities/0.log"
Nov 28 17:58:25 crc kubenswrapper[4710]: I1128 17:58:25.903566 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sl4b6_78802154-2da1-4554-92d3-20994dfac727/registry-server/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.024129 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tdbpv_aff5b3d8-f488-487f-9407-07c88e139d95/extract-content/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.047310 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tdbpv_aff5b3d8-f488-487f-9407-07c88e139d95/extract-content/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.055778 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tdbpv_aff5b3d8-f488-487f-9407-07c88e139d95/extract-utilities/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.212800 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tdbpv_aff5b3d8-f488-487f-9407-07c88e139d95/extract-utilities/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.223007 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tdbpv_aff5b3d8-f488-487f-9407-07c88e139d95/extract-content/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.258312 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-tdbpv_aff5b3d8-f488-487f-9407-07c88e139d95/registry-server/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.316560 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdv6c_d8d16a8e-94b3-4552-873c-a100d1fa8bc6/extract-utilities/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.500349 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdv6c_d8d16a8e-94b3-4552-873c-a100d1fa8bc6/extract-utilities/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.513656 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdv6c_d8d16a8e-94b3-4552-873c-a100d1fa8bc6/extract-content/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.644192 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdv6c_d8d16a8e-94b3-4552-873c-a100d1fa8bc6/extract-content/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.819082 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdv6c_d8d16a8e-94b3-4552-873c-a100d1fa8bc6/extract-utilities/0.log"
Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.868103 4710 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdv6c_d8d16a8e-94b3-4552-873c-a100d1fa8bc6/extract-content/0.log" Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.908078 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-vdv6c_d8d16a8e-94b3-4552-873c-a100d1fa8bc6/registry-server/0.log" Nov 28 17:58:26 crc kubenswrapper[4710]: I1128 17:58:26.916965 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdmpr_41e6bdd6-6ee2-4793-b202-d0297c3843f1/extract-utilities/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.051265 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdmpr_41e6bdd6-6ee2-4793-b202-d0297c3843f1/extract-utilities/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.068397 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdmpr_41e6bdd6-6ee2-4793-b202-d0297c3843f1/extract-content/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.113187 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdmpr_41e6bdd6-6ee2-4793-b202-d0297c3843f1/extract-content/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.272174 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdmpr_41e6bdd6-6ee2-4793-b202-d0297c3843f1/extract-content/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.280066 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdmpr_41e6bdd6-6ee2-4793-b202-d0297c3843f1/extract-utilities/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.325253 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-wdmpr_41e6bdd6-6ee2-4793-b202-d0297c3843f1/registry-server/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.352494 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8fvm_69fc5b9f-c1de-4e0f-9f04-1a9db62f2814/extract-utilities/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.557448 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8fvm_69fc5b9f-c1de-4e0f-9f04-1a9db62f2814/extract-content/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.565683 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8fvm_69fc5b9f-c1de-4e0f-9f04-1a9db62f2814/extract-content/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.571375 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8fvm_69fc5b9f-c1de-4e0f-9f04-1a9db62f2814/extract-utilities/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.741932 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8fvm_69fc5b9f-c1de-4e0f-9f04-1a9db62f2814/extract-content/0.log" Nov 28 17:58:27 crc kubenswrapper[4710]: I1128 17:58:27.754147 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z8fvm_69fc5b9f-c1de-4e0f-9f04-1a9db62f2814/extract-utilities/0.log" Nov 28 17:58:28 crc kubenswrapper[4710]: I1128 17:58:28.530001 4710 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-z8fvm_69fc5b9f-c1de-4e0f-9f04-1a9db62f2814/registry-server/0.log" Nov 28 17:58:43 crc kubenswrapper[4710]: I1128 17:58:43.343806 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:58:43 crc kubenswrapper[4710]: I1128 17:58:43.344233 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:58:55 crc kubenswrapper[4710]: I1128 17:58:55.944579 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-867dcf9474-l79hr_13835a45-f211-4e69-bccd-98ef4e8a5594/kube-rbac-proxy/0.log" Nov 28 17:58:56 crc kubenswrapper[4710]: I1128 17:58:56.042877 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-867dcf9474-l79hr_13835a45-f211-4e69-bccd-98ef4e8a5594/manager/0.log" Nov 28 17:59:13 crc kubenswrapper[4710]: I1128 17:59:13.345338 4710 patch_prober.go:28] interesting pod/machine-config-daemon-9mscc container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 28 17:59:13 crc kubenswrapper[4710]: I1128 17:59:13.345922 4710 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 28 17:59:13 crc kubenswrapper[4710]: I1128 17:59:13.345963 4710 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" Nov 28 17:59:13 crc kubenswrapper[4710]: I1128 17:59:13.346519 4710 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db"} pod="openshift-machine-config-operator/machine-config-daemon-9mscc" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 28 17:59:13 crc kubenswrapper[4710]: I1128 17:59:13.346603 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerName="machine-config-daemon" containerID="cri-o://637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" gracePeriod=600 Nov 28 17:59:13 crc kubenswrapper[4710]: E1128 17:59:13.468309 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:59:14 crc kubenswrapper[4710]: I1128 17:59:14.451931 4710 generic.go:334] "Generic (PLEG): container finished" podID="4ca87069-1d78-4e20-ba15-f37acec7135b" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" exitCode=0 Nov 28 17:59:14 crc kubenswrapper[4710]: I1128 17:59:14.452037 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerDied","Data":"637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db"} Nov 28 17:59:14 crc kubenswrapper[4710]: I1128 17:59:14.452377 4710 scope.go:117] "RemoveContainer" containerID="9faa162fa2d0e90421242c87e8957b2d01034457183612706fea687b37d5e765" Nov 28 17:59:14 crc kubenswrapper[4710]: I1128 17:59:14.453113 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 17:59:14 crc kubenswrapper[4710]: E1128 17:59:14.453669 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:59:29 crc kubenswrapper[4710]: I1128 17:59:29.144246 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 17:59:29 crc kubenswrapper[4710]: E1128 17:59:29.144961 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:59:42 crc kubenswrapper[4710]: I1128 17:59:42.142898 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 17:59:42 crc kubenswrapper[4710]: E1128 17:59:42.144041 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 17:59:55 crc kubenswrapper[4710]: I1128 17:59:55.141546 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 17:59:55 crc kubenswrapper[4710]: E1128 17:59:55.142276 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" 
podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.237847 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7"] Nov 28 18:00:00 crc kubenswrapper[4710]: E1128 18:00:00.238993 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e506a3-be3c-4213-9923-304162c78082" containerName="extract-utilities" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.239014 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e506a3-be3c-4213-9923-304162c78082" containerName="extract-utilities" Nov 28 18:00:00 crc kubenswrapper[4710]: E1128 18:00:00.239036 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e506a3-be3c-4213-9923-304162c78082" containerName="extract-content" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.239044 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e506a3-be3c-4213-9923-304162c78082" containerName="extract-content" Nov 28 18:00:00 crc kubenswrapper[4710]: E1128 18:00:00.239063 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e506a3-be3c-4213-9923-304162c78082" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.239073 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e506a3-be3c-4213-9923-304162c78082" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[4710]: E1128 18:00:00.239084 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerName="extract-utilities" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.239091 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerName="extract-utilities" Nov 28 18:00:00 crc kubenswrapper[4710]: E1128 18:00:00.239100 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.239107 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[4710]: E1128 18:00:00.239137 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerName="extract-content" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.239145 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerName="extract-content" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.239408 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e506a3-be3c-4213-9923-304162c78082" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.239437 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eee47f6-82ff-4dcd-b69f-1007e97d651d" containerName="registry-server" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.240221 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7"] Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.240307 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.265494 4710 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.265920 4710 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.458263 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbllm\" (UniqueName: \"kubernetes.io/projected/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-kube-api-access-dbllm\") pod \"collect-profiles-29405880-qhxw7\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.458690 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-config-volume\") pod \"collect-profiles-29405880-qhxw7\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.458899 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-secret-volume\") pod \"collect-profiles-29405880-qhxw7\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.563032 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-secret-volume\") pod \"collect-profiles-29405880-qhxw7\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.563315 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbllm\" (UniqueName: \"kubernetes.io/projected/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-kube-api-access-dbllm\") pod \"collect-profiles-29405880-qhxw7\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.563404 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-config-volume\") pod \"collect-profiles-29405880-qhxw7\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.566425 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-config-volume\") pod \"collect-profiles-29405880-qhxw7\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc 
kubenswrapper[4710]: I1128 18:00:00.588313 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-secret-volume\") pod \"collect-profiles-29405880-qhxw7\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.599303 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbllm\" (UniqueName: \"kubernetes.io/projected/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-kube-api-access-dbllm\") pod \"collect-profiles-29405880-qhxw7\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:00 crc kubenswrapper[4710]: I1128 18:00:00.886392 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:01 crc kubenswrapper[4710]: I1128 18:00:01.519229 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7"] Nov 28 18:00:01 crc kubenswrapper[4710]: W1128 18:00:01.527410 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8d8be1_9edd_4c2d_973c_ccc4ca23efb8.slice/crio-ea7fcddb5b9b61ac9fdb52b0b3004945fc812d0ba664dbaa47814d14cb04f376 WatchSource:0}: Error finding container ea7fcddb5b9b61ac9fdb52b0b3004945fc812d0ba664dbaa47814d14cb04f376: Status 404 returned error can't find the container with id ea7fcddb5b9b61ac9fdb52b0b3004945fc812d0ba664dbaa47814d14cb04f376 Nov 28 18:00:02 crc kubenswrapper[4710]: I1128 18:00:02.056072 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" event={"ID":"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8","Type":"ContainerStarted","Data":"0cd2b1a465b7701b7e202d6a7a3f714ac91f89713ea6ddf61c4ac44cc5d1dfd1"} Nov 28 18:00:02 crc kubenswrapper[4710]: I1128 18:00:02.056310 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" event={"ID":"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8","Type":"ContainerStarted","Data":"ea7fcddb5b9b61ac9fdb52b0b3004945fc812d0ba664dbaa47814d14cb04f376"} Nov 28 18:00:02 crc kubenswrapper[4710]: I1128 18:00:02.083281 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" podStartSLOduration=2.083242116 podStartE2EDuration="2.083242116s" podCreationTimestamp="2025-11-28 18:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-28 18:00:02.071951318 +0000 UTC m=+3691.330251373" watchObservedRunningTime="2025-11-28 18:00:02.083242116 +0000 UTC m=+3691.341542161" Nov 28 18:00:03 crc kubenswrapper[4710]: I1128 18:00:03.074337 4710 generic.go:334] "Generic (PLEG): container finished" podID="ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8" containerID="0cd2b1a465b7701b7e202d6a7a3f714ac91f89713ea6ddf61c4ac44cc5d1dfd1" exitCode=0 Nov 28 18:00:03 crc kubenswrapper[4710]: I1128 18:00:03.074729 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" 
event={"ID":"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8","Type":"ContainerDied","Data":"0cd2b1a465b7701b7e202d6a7a3f714ac91f89713ea6ddf61c4ac44cc5d1dfd1"} Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.561438 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.663572 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-config-volume\") pod \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.663679 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbllm\" (UniqueName: \"kubernetes.io/projected/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-kube-api-access-dbllm\") pod \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.663711 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-secret-volume\") pod \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\" (UID: \"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8\") " Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.664350 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-config-volume" (OuterVolumeSpecName: "config-volume") pod "ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8" (UID: "ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.668975 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8" (UID: "ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.670564 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-kube-api-access-dbllm" (OuterVolumeSpecName: "kube-api-access-dbllm") pod "ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8" (UID: "ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8"). InnerVolumeSpecName "kube-api-access-dbllm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.766631 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbllm\" (UniqueName: \"kubernetes.io/projected/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-kube-api-access-dbllm\") on node \"crc\" DevicePath \"\"" Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.766671 4710 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 28 18:00:04 crc kubenswrapper[4710]: I1128 18:00:04.766687 4710 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8-config-volume\") on node \"crc\" DevicePath \"\"" Nov 28 18:00:05 crc kubenswrapper[4710]: I1128 18:00:05.107434 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" event={"ID":"ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8","Type":"ContainerDied","Data":"ea7fcddb5b9b61ac9fdb52b0b3004945fc812d0ba664dbaa47814d14cb04f376"} Nov 28 18:00:05 crc kubenswrapper[4710]: I1128 18:00:05.107511 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea7fcddb5b9b61ac9fdb52b0b3004945fc812d0ba664dbaa47814d14cb04f376" Nov 28 18:00:05 crc kubenswrapper[4710]: I1128 18:00:05.107588 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29405880-qhxw7" Nov 28 18:00:05 crc kubenswrapper[4710]: I1128 18:00:05.649638 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5"] Nov 28 18:00:05 crc kubenswrapper[4710]: I1128 18:00:05.661505 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29405835-wptj5"] Nov 28 18:00:06 crc kubenswrapper[4710]: I1128 18:00:06.149066 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:00:06 crc kubenswrapper[4710]: E1128 18:00:06.149751 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:00:07 crc kubenswrapper[4710]: I1128 18:00:07.186362 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ad9606e-ed8b-4be2-b066-4b9bc7935a85" path="/var/lib/kubelet/pods/1ad9606e-ed8b-4be2-b066-4b9bc7935a85/volumes" Nov 28 18:00:17 crc kubenswrapper[4710]: I1128 18:00:17.143001 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:00:17 crc kubenswrapper[4710]: E1128 18:00:17.144095 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:00:27 crc kubenswrapper[4710]: I1128 18:00:27.409329 4710 generic.go:334] "Generic (PLEG): container finished" podID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" containerID="a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919" exitCode=0 Nov 28 18:00:27 crc kubenswrapper[4710]: I1128 18:00:27.409480 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-22qbx/must-gather-bpq96" event={"ID":"10395304-0e2c-4cb0-bfd0-7a850ac729ef","Type":"ContainerDied","Data":"a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919"} Nov 28 18:00:27 crc kubenswrapper[4710]: I1128 18:00:27.411287 4710 scope.go:117] "RemoveContainer" containerID="a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919" Nov 28 18:00:28 crc kubenswrapper[4710]: I1128 18:00:28.107608 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-22qbx_must-gather-bpq96_10395304-0e2c-4cb0-bfd0-7a850ac729ef/gather/0.log" Nov 28 18:00:32 crc kubenswrapper[4710]: I1128 18:00:32.142406 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:00:32 crc kubenswrapper[4710]: E1128 18:00:32.144709 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:00:36 crc kubenswrapper[4710]: I1128 18:00:36.731532 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-22qbx/must-gather-bpq96"] Nov 28 18:00:36 crc kubenswrapper[4710]: I1128 18:00:36.732705 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-22qbx/must-gather-bpq96" podUID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" containerName="copy" containerID="cri-o://e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f" gracePeriod=2 Nov 28 18:00:36 crc kubenswrapper[4710]: I1128 18:00:36.742555 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-22qbx/must-gather-bpq96"] Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.256931 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-22qbx_must-gather-bpq96_10395304-0e2c-4cb0-bfd0-7a850ac729ef/copy/0.log" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.257897 4710 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.425297 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10395304-0e2c-4cb0-bfd0-7a850ac729ef-must-gather-output\") pod \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\" (UID: \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\") " Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.425358 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnfjj\" (UniqueName: \"kubernetes.io/projected/10395304-0e2c-4cb0-bfd0-7a850ac729ef-kube-api-access-xnfjj\") pod \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\" (UID: \"10395304-0e2c-4cb0-bfd0-7a850ac729ef\") " Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.448673 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10395304-0e2c-4cb0-bfd0-7a850ac729ef-kube-api-access-xnfjj" (OuterVolumeSpecName: "kube-api-access-xnfjj") pod "10395304-0e2c-4cb0-bfd0-7a850ac729ef" (UID: "10395304-0e2c-4cb0-bfd0-7a850ac729ef"). InnerVolumeSpecName "kube-api-access-xnfjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.529957 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnfjj\" (UniqueName: \"kubernetes.io/projected/10395304-0e2c-4cb0-bfd0-7a850ac729ef-kube-api-access-xnfjj\") on node \"crc\" DevicePath \"\"" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.576735 4710 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-22qbx_must-gather-bpq96_10395304-0e2c-4cb0-bfd0-7a850ac729ef/copy/0.log" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.578230 4710 generic.go:334] "Generic (PLEG): container finished" podID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" containerID="e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f" exitCode=143 Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.578305 4710 scope.go:117] "RemoveContainer" containerID="e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.578464 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-22qbx/must-gather-bpq96" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.604236 4710 scope.go:117] "RemoveContainer" containerID="a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.634547 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10395304-0e2c-4cb0-bfd0-7a850ac729ef-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "10395304-0e2c-4cb0-bfd0-7a850ac729ef" (UID: "10395304-0e2c-4cb0-bfd0-7a850ac729ef"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.635414 4710 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/10395304-0e2c-4cb0-bfd0-7a850ac729ef-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.675247 4710 scope.go:117] "RemoveContainer" containerID="e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f" Nov 28 18:00:37 crc kubenswrapper[4710]: E1128 18:00:37.675863 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f\": container with ID starting with e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f not found: ID does not exist" containerID="e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.675910 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f"} err="failed to get container status \"e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f\": rpc error: code = NotFound desc = could not find container \"e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f\": container with ID starting with e3326f0386676a064e5aafd3f23e6512a39cc637cbfd1a0902568680211f850f not found: ID does not exist" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.675947 4710 scope.go:117] "RemoveContainer" containerID="a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919" Nov 28 18:00:37 crc kubenswrapper[4710]: E1128 18:00:37.676218 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919\": container with ID starting with a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919 not found: ID does not exist" containerID="a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919" Nov 28 18:00:37 crc kubenswrapper[4710]: I1128 18:00:37.676247 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919"} err="failed to get container status \"a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919\": rpc error: code = NotFound desc = could not find container \"a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919\": container with ID starting with a5cdfa3503a58dcd44075b50a523973aec8878e749ee7b7b9f6aabff584ab919 not found: ID does not exist" Nov 28 18:00:39 crc kubenswrapper[4710]: I1128 18:00:39.153859 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" path="/var/lib/kubelet/pods/10395304-0e2c-4cb0-bfd0-7a850ac729ef/volumes" Nov 28 18:00:43 crc kubenswrapper[4710]: I1128 18:00:43.854243 4710 scope.go:117] "RemoveContainer" containerID="9eead0610ace5731b807fc23aaf441d559113844a215e40c6a8f1a18fb4b157f" Nov 28 18:00:43 crc kubenswrapper[4710]: I1128 18:00:43.907826 4710 scope.go:117] "RemoveContainer" containerID="7e50b18ae0017b57b98a4e408ea6d5232230e45f2f30641be04326de16031872" Nov 28 18:00:46 crc kubenswrapper[4710]: I1128 18:00:46.142447 4710 scope.go:117] "RemoveContainer" 
containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:00:46 crc kubenswrapper[4710]: E1128 18:00:46.143294 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:00:58 crc kubenswrapper[4710]: I1128 18:00:58.142902 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:00:58 crc kubenswrapper[4710]: E1128 18:00:58.143975 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.157377 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29405881-zftgc"] Nov 28 18:01:00 crc kubenswrapper[4710]: E1128 18:01:00.158166 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" containerName="gather" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.158182 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" containerName="gather" Nov 28 18:01:00 crc kubenswrapper[4710]: E1128 18:01:00.158218 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" containerName="copy" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.158225 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" containerName="copy" Nov 28 18:01:00 crc kubenswrapper[4710]: E1128 18:01:00.158250 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8" containerName="collect-profiles" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.158256 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8" containerName="collect-profiles" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.158457 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" containerName="copy" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.158470 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8d8be1-9edd-4c2d-973c-ccc4ca23efb8" containerName="collect-profiles" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.158481 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="10395304-0e2c-4cb0-bfd0-7a850ac729ef" containerName="gather" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.160050 4710 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.177522 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405881-zftgc"] Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.192400 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-combined-ca-bundle\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.192455 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-config-data\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.192538 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-fernet-keys\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.192597 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p55q5\" (UniqueName: \"kubernetes.io/projected/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-kube-api-access-p55q5\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.294703 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-combined-ca-bundle\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.294765 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-config-data\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.294835 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-fernet-keys\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.294889 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p55q5\" (UniqueName: \"kubernetes.io/projected/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-kube-api-access-p55q5\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.301725 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-fernet-keys\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.301722 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-config-data\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.305662 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-combined-ca-bundle\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.316704 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p55q5\" (UniqueName: \"kubernetes.io/projected/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-kube-api-access-p55q5\") pod \"keystone-cron-29405881-zftgc\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:00 crc kubenswrapper[4710]: I1128 18:01:00.508739 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:01 crc kubenswrapper[4710]: W1128 18:01:01.056640 4710 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2ebc9a7_5131_4127_9c75_aad0dc2e874a.slice/crio-2336778c3829b786c4865dca4490d402adda9bfb2e7296b5fd0a6e79bdf4f54d WatchSource:0}: Error finding container 2336778c3829b786c4865dca4490d402adda9bfb2e7296b5fd0a6e79bdf4f54d: Status 404 returned error can't find the container with id 2336778c3829b786c4865dca4490d402adda9bfb2e7296b5fd0a6e79bdf4f54d Nov 28 18:01:01 crc kubenswrapper[4710]: I1128 18:01:01.076615 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29405881-zftgc"] Nov 28 18:01:01 crc kubenswrapper[4710]: I1128 18:01:01.852008 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405881-zftgc" event={"ID":"b2ebc9a7-5131-4127-9c75-aad0dc2e874a","Type":"ContainerStarted","Data":"e6dd39e97a5e599440e0d42cf7a58c155ae627424d7755f76971c9c9b77675d2"} Nov 28 18:01:01 crc kubenswrapper[4710]: I1128 18:01:01.852305 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405881-zftgc" event={"ID":"b2ebc9a7-5131-4127-9c75-aad0dc2e874a","Type":"ContainerStarted","Data":"2336778c3829b786c4865dca4490d402adda9bfb2e7296b5fd0a6e79bdf4f54d"} Nov 28 18:01:03 crc kubenswrapper[4710]: I1128 18:01:03.876804 4710 generic.go:334] "Generic (PLEG): container finished" podID="b2ebc9a7-5131-4127-9c75-aad0dc2e874a" containerID="e6dd39e97a5e599440e0d42cf7a58c155ae627424d7755f76971c9c9b77675d2" exitCode=0 Nov 28 18:01:03 crc kubenswrapper[4710]: I1128 18:01:03.877133 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405881-zftgc" event={"ID":"b2ebc9a7-5131-4127-9c75-aad0dc2e874a","Type":"ContainerDied","Data":"e6dd39e97a5e599440e0d42cf7a58c155ae627424d7755f76971c9c9b77675d2"} Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.417039 4710 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.528552 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p55q5\" (UniqueName: \"kubernetes.io/projected/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-kube-api-access-p55q5\") pod \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.528693 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-config-data\") pod \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.528737 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-combined-ca-bundle\") pod \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.528837 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-fernet-keys\") pod \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\" (UID: \"b2ebc9a7-5131-4127-9c75-aad0dc2e874a\") " Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.545178 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-kube-api-access-p55q5" (OuterVolumeSpecName: "kube-api-access-p55q5") pod "b2ebc9a7-5131-4127-9c75-aad0dc2e874a" (UID: "b2ebc9a7-5131-4127-9c75-aad0dc2e874a"). InnerVolumeSpecName "kube-api-access-p55q5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.558655 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b2ebc9a7-5131-4127-9c75-aad0dc2e874a" (UID: "b2ebc9a7-5131-4127-9c75-aad0dc2e874a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.569942 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2ebc9a7-5131-4127-9c75-aad0dc2e874a" (UID: "b2ebc9a7-5131-4127-9c75-aad0dc2e874a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.587844 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-config-data" (OuterVolumeSpecName: "config-data") pod "b2ebc9a7-5131-4127-9c75-aad0dc2e874a" (UID: "b2ebc9a7-5131-4127-9c75-aad0dc2e874a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.631463 4710 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.631492 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p55q5\" (UniqueName: \"kubernetes.io/projected/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-kube-api-access-p55q5\") on node \"crc\" DevicePath \"\"" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.631502 4710 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-config-data\") on node \"crc\" DevicePath \"\"" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.631511 4710 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2ebc9a7-5131-4127-9c75-aad0dc2e874a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.933294 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29405881-zftgc" event={"ID":"b2ebc9a7-5131-4127-9c75-aad0dc2e874a","Type":"ContainerDied","Data":"2336778c3829b786c4865dca4490d402adda9bfb2e7296b5fd0a6e79bdf4f54d"} Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.933340 4710 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2336778c3829b786c4865dca4490d402adda9bfb2e7296b5fd0a6e79bdf4f54d" Nov 28 18:01:05 crc kubenswrapper[4710]: I1128 18:01:05.933366 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29405881-zftgc" Nov 28 18:01:09 crc kubenswrapper[4710]: I1128 18:01:09.142520 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:01:09 crc kubenswrapper[4710]: E1128 18:01:09.143563 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:01:24 crc kubenswrapper[4710]: I1128 18:01:24.143451 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:01:24 crc kubenswrapper[4710]: E1128 18:01:24.144653 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:01:36 crc kubenswrapper[4710]: I1128 18:01:36.142936 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:01:36 crc kubenswrapper[4710]: E1128 18:01:36.143872 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.239205 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tqlkh"]
Nov 28 18:01:37 crc kubenswrapper[4710]: E1128 18:01:37.240263 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ebc9a7-5131-4127-9c75-aad0dc2e874a" containerName="keystone-cron"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.240297 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ebc9a7-5131-4127-9c75-aad0dc2e874a" containerName="keystone-cron"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.240711 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ebc9a7-5131-4127-9c75-aad0dc2e874a" containerName="keystone-cron"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.243113 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.256619 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqlkh"]
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.425687 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-utilities\") pod \"redhat-marketplace-tqlkh\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") " pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.425829 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-catalog-content\") pod \"redhat-marketplace-tqlkh\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") " pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.425904 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbzzk\" (UniqueName: \"kubernetes.io/projected/d615ba96-0f57-4307-9f27-f20d5b04ea2d-kube-api-access-hbzzk\") pod \"redhat-marketplace-tqlkh\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") " pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.528040 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbzzk\" (UniqueName: \"kubernetes.io/projected/d615ba96-0f57-4307-9f27-f20d5b04ea2d-kube-api-access-hbzzk\") pod \"redhat-marketplace-tqlkh\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") " pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.528416 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-utilities\") pod \"redhat-marketplace-tqlkh\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") " pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.528644 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-catalog-content\") pod \"redhat-marketplace-tqlkh\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") " pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.529072 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-utilities\") pod \"redhat-marketplace-tqlkh\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") " pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.529111 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-catalog-content\") pod \"redhat-marketplace-tqlkh\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") " pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.548404 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbzzk\" (UniqueName: \"kubernetes.io/projected/d615ba96-0f57-4307-9f27-f20d5b04ea2d-kube-api-access-hbzzk\") pod \"redhat-marketplace-tqlkh\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") " pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:37 crc kubenswrapper[4710]: I1128 18:01:37.570581 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:38 crc kubenswrapper[4710]: I1128 18:01:38.082479 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqlkh"]
Nov 28 18:01:38 crc kubenswrapper[4710]: I1128 18:01:38.372634 4710 generic.go:334] "Generic (PLEG): container finished" podID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerID="b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf" exitCode=0
Nov 28 18:01:38 crc kubenswrapper[4710]: I1128 18:01:38.372686 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqlkh" event={"ID":"d615ba96-0f57-4307-9f27-f20d5b04ea2d","Type":"ContainerDied","Data":"b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf"}
Nov 28 18:01:38 crc kubenswrapper[4710]: I1128 18:01:38.372993 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqlkh" event={"ID":"d615ba96-0f57-4307-9f27-f20d5b04ea2d","Type":"ContainerStarted","Data":"19fd1b84366882e26cf2f6849d1b77976072a7c63f0bebadcc42ee72bf87706f"}
Nov 28 18:01:40 crc kubenswrapper[4710]: I1128 18:01:40.395022 4710 generic.go:334] "Generic (PLEG): container finished" podID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerID="dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572" exitCode=0
Nov 28 18:01:40 crc kubenswrapper[4710]: I1128 18:01:40.395469 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqlkh" event={"ID":"d615ba96-0f57-4307-9f27-f20d5b04ea2d","Type":"ContainerDied","Data":"dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572"}
Nov 28 18:01:41 crc kubenswrapper[4710]: I1128 18:01:41.414665 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqlkh" event={"ID":"d615ba96-0f57-4307-9f27-f20d5b04ea2d","Type":"ContainerStarted","Data":"87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909"}
Nov 28 18:01:41 crc kubenswrapper[4710]: I1128 18:01:41.442661 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tqlkh" podStartSLOduration=1.957381721 podStartE2EDuration="4.442642346s" podCreationTimestamp="2025-11-28 18:01:37 +0000 UTC" firstStartedPulling="2025-11-28 18:01:38.375898473 +0000 UTC m=+3787.634198518" lastFinishedPulling="2025-11-28 18:01:40.861159058 +0000 UTC m=+3790.119459143" observedRunningTime="2025-11-28 18:01:41.432967563 +0000 UTC m=+3790.691267628" watchObservedRunningTime="2025-11-28 18:01:41.442642346 +0000 UTC m=+3790.700942391"
Nov 28 18:01:44 crc kubenswrapper[4710]: I1128 18:01:44.051594 4710 scope.go:117] "RemoveContainer" containerID="375931c32e11f978641e7b2fcb4009eda60e2ab095b565ecf7abac3cd2660f76"
Nov 28 18:01:47 crc kubenswrapper[4710]: I1128 18:01:47.571613 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:47 crc kubenswrapper[4710]: I1128 18:01:47.572227 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:47 crc kubenswrapper[4710]: I1128 18:01:47.660471 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:48 crc kubenswrapper[4710]: I1128 18:01:48.552422 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:48 crc kubenswrapper[4710]: I1128 18:01:48.605986 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqlkh"]
Nov 28 18:01:49 crc kubenswrapper[4710]: I1128 18:01:49.142241 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db"
Nov 28 18:01:49 crc kubenswrapper[4710]: E1128 18:01:49.143139 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 18:01:50 crc kubenswrapper[4710]: I1128 18:01:50.544094 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tqlkh" podUID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerName="registry-server" containerID="cri-o://87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909" gracePeriod=2
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.064451 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.070800 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-catalog-content\") pod \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") "
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.071034 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbzzk\" (UniqueName: \"kubernetes.io/projected/d615ba96-0f57-4307-9f27-f20d5b04ea2d-kube-api-access-hbzzk\") pod \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") "
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.071102 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-utilities\") pod \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\" (UID: \"d615ba96-0f57-4307-9f27-f20d5b04ea2d\") "
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.072106 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-utilities" (OuterVolumeSpecName: "utilities") pod "d615ba96-0f57-4307-9f27-f20d5b04ea2d" (UID: "d615ba96-0f57-4307-9f27-f20d5b04ea2d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.081961 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d615ba96-0f57-4307-9f27-f20d5b04ea2d-kube-api-access-hbzzk" (OuterVolumeSpecName: "kube-api-access-hbzzk") pod "d615ba96-0f57-4307-9f27-f20d5b04ea2d" (UID: "d615ba96-0f57-4307-9f27-f20d5b04ea2d"). InnerVolumeSpecName "kube-api-access-hbzzk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.108603 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d615ba96-0f57-4307-9f27-f20d5b04ea2d" (UID: "d615ba96-0f57-4307-9f27-f20d5b04ea2d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.174195 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.174270 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbzzk\" (UniqueName: \"kubernetes.io/projected/d615ba96-0f57-4307-9f27-f20d5b04ea2d-kube-api-access-hbzzk\") on node \"crc\" DevicePath \"\""
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.174285 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d615ba96-0f57-4307-9f27-f20d5b04ea2d-utilities\") on node \"crc\" DevicePath \"\""
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.563137 4710 generic.go:334] "Generic (PLEG): container finished" podID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerID="87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909" exitCode=0
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.563178 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqlkh" event={"ID":"d615ba96-0f57-4307-9f27-f20d5b04ea2d","Type":"ContainerDied","Data":"87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909"}
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.563445 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tqlkh" event={"ID":"d615ba96-0f57-4307-9f27-f20d5b04ea2d","Type":"ContainerDied","Data":"19fd1b84366882e26cf2f6849d1b77976072a7c63f0bebadcc42ee72bf87706f"}
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.563468 4710 scope.go:117] "RemoveContainer" containerID="87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.563214 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tqlkh"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.599985 4710 scope.go:117] "RemoveContainer" containerID="dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.604647 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqlkh"]
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.620820 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tqlkh"]
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.634334 4710 scope.go:117] "RemoveContainer" containerID="b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.727146 4710 scope.go:117] "RemoveContainer" containerID="87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909"
Nov 28 18:01:51 crc kubenswrapper[4710]: E1128 18:01:51.727630 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909\": container with ID starting with 87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909 not found: ID does not exist" containerID="87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.727662 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909"} err="failed to get container status \"87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909\": rpc error: code = NotFound desc = could not find container \"87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909\": container with ID starting with 87f053bc64063a44000ad1e076945f1297eeb46e2785586cfc0445a6b012a909 not found: ID does not exist"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.727682 4710 scope.go:117] "RemoveContainer" containerID="dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572"
Nov 28 18:01:51 crc kubenswrapper[4710]: E1128 18:01:51.728110 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572\": container with ID starting with dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572 not found: ID does not exist" containerID="dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.728133 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572"} err="failed to get container status \"dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572\": rpc error: code = NotFound desc = could not find container \"dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572\": container with ID starting with dc2f3444eb37b8026decaa5e4badfefdce476cac284f2bd93b7fbbcf7fa50572 not found: ID does not exist"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.728146 4710 scope.go:117] "RemoveContainer" containerID="b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf"
Nov 28 18:01:51 crc kubenswrapper[4710]: E1128 18:01:51.728411 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf\": container with ID starting with b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf not found: ID does not exist" containerID="b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf"
Nov 28 18:01:51 crc kubenswrapper[4710]: I1128 18:01:51.728427 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf"} err="failed to get container status \"b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf\": rpc error: code = NotFound desc = could not find container \"b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf\": container with ID starting with b655b39e8aa4d376d246b4a650dd6806ff81579dbef976da7409ce0856274bcf not found: ID does not exist"
Nov 28 18:01:53 crc kubenswrapper[4710]: I1128 18:01:53.157673 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" path="/var/lib/kubelet/pods/d615ba96-0f57-4307-9f27-f20d5b04ea2d/volumes"
Nov 28 18:02:00 crc kubenswrapper[4710]: I1128 18:02:00.142661 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db"
Nov 28 18:02:00 crc kubenswrapper[4710]: E1128 18:02:00.143800 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 18:02:15 crc kubenswrapper[4710]: I1128 18:02:15.141912 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db"
Nov 28 18:02:15 crc kubenswrapper[4710]: E1128 18:02:15.142638 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 18:02:28 crc kubenswrapper[4710]: I1128 18:02:28.141519 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db"
Nov 28 18:02:28 crc kubenswrapper[4710]: E1128 18:02:28.158451 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b"
Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.036822 4710 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gjs77"]
Nov 28 18:02:40 crc kubenswrapper[4710]: E1128 18:02:40.038305 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerName="registry-server"
containerName="registry-server" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.038339 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerName="registry-server" Nov 28 18:02:40 crc kubenswrapper[4710]: E1128 18:02:40.038380 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerName="extract-utilities" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.038398 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerName="extract-utilities" Nov 28 18:02:40 crc kubenswrapper[4710]: E1128 18:02:40.038484 4710 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerName="extract-content" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.038504 4710 state_mem.go:107] "Deleted CPUSet assignment" podUID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerName="extract-content" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.039094 4710 memory_manager.go:354] "RemoveStaleState removing state" podUID="d615ba96-0f57-4307-9f27-f20d5b04ea2d" containerName="registry-server" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.043174 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.047298 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gjs77"] Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.149384 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-catalog-content\") pod \"certified-operators-gjs77\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.149548 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lznsw\" (UniqueName: \"kubernetes.io/projected/df77e309-3971-4360-ac62-c43ff0ba9888-kube-api-access-lznsw\") pod \"certified-operators-gjs77\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.149604 4710 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-utilities\") pod \"certified-operators-gjs77\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.252117 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-catalog-content\") pod \"certified-operators-gjs77\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.252256 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lznsw\" (UniqueName: \"kubernetes.io/projected/df77e309-3971-4360-ac62-c43ff0ba9888-kube-api-access-lznsw\") pod \"certified-operators-gjs77\" (UID: 
\"df77e309-3971-4360-ac62-c43ff0ba9888\") " pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.252294 4710 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-utilities\") pod \"certified-operators-gjs77\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.252589 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-catalog-content\") pod \"certified-operators-gjs77\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.252969 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-utilities\") pod \"certified-operators-gjs77\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.279530 4710 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lznsw\" (UniqueName: \"kubernetes.io/projected/df77e309-3971-4360-ac62-c43ff0ba9888-kube-api-access-lznsw\") pod \"certified-operators-gjs77\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.395649 4710 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:40 crc kubenswrapper[4710]: I1128 18:02:40.964575 4710 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gjs77"] Nov 28 18:02:41 crc kubenswrapper[4710]: I1128 18:02:41.159259 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:02:41 crc kubenswrapper[4710]: E1128 18:02:41.159745 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:02:41 crc kubenswrapper[4710]: I1128 18:02:41.197903 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjs77" event={"ID":"df77e309-3971-4360-ac62-c43ff0ba9888","Type":"ContainerStarted","Data":"c6302f58812a4b6185bf5eefcc3d9aa01530d145604c2400db0783a82ffa158d"} Nov 28 18:02:42 crc kubenswrapper[4710]: I1128 18:02:42.219138 4710 generic.go:334] "Generic (PLEG): container finished" podID="df77e309-3971-4360-ac62-c43ff0ba9888" containerID="4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3" exitCode=0 Nov 28 18:02:42 crc kubenswrapper[4710]: I1128 18:02:42.219458 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjs77" 
event={"ID":"df77e309-3971-4360-ac62-c43ff0ba9888","Type":"ContainerDied","Data":"4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3"} Nov 28 18:02:42 crc kubenswrapper[4710]: I1128 18:02:42.223271 4710 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 28 18:02:43 crc kubenswrapper[4710]: I1128 18:02:43.237882 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjs77" event={"ID":"df77e309-3971-4360-ac62-c43ff0ba9888","Type":"ContainerStarted","Data":"ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8"} Nov 28 18:02:44 crc kubenswrapper[4710]: I1128 18:02:44.256388 4710 generic.go:334] "Generic (PLEG): container finished" podID="df77e309-3971-4360-ac62-c43ff0ba9888" containerID="ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8" exitCode=0 Nov 28 18:02:44 crc kubenswrapper[4710]: I1128 18:02:44.256492 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjs77" event={"ID":"df77e309-3971-4360-ac62-c43ff0ba9888","Type":"ContainerDied","Data":"ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8"} Nov 28 18:02:45 crc kubenswrapper[4710]: I1128 18:02:45.276897 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjs77" event={"ID":"df77e309-3971-4360-ac62-c43ff0ba9888","Type":"ContainerStarted","Data":"044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5"} Nov 28 18:02:45 crc kubenswrapper[4710]: I1128 18:02:45.318311 4710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gjs77" podStartSLOduration=2.6737852 podStartE2EDuration="5.318278034s" podCreationTimestamp="2025-11-28 18:02:40 +0000 UTC" firstStartedPulling="2025-11-28 18:02:42.222995007 +0000 UTC m=+3851.481295062" lastFinishedPulling="2025-11-28 18:02:44.867487811 +0000 UTC m=+3854.125787896" observedRunningTime="2025-11-28 18:02:45.308189949 +0000 UTC m=+3854.566490034" watchObservedRunningTime="2025-11-28 18:02:45.318278034 +0000 UTC m=+3854.576578119" Nov 28 18:02:50 crc kubenswrapper[4710]: I1128 18:02:50.396239 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:50 crc kubenswrapper[4710]: I1128 18:02:50.396982 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:50 crc kubenswrapper[4710]: I1128 18:02:50.471443 4710 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:51 crc kubenswrapper[4710]: I1128 18:02:51.441500 4710 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:51 crc kubenswrapper[4710]: I1128 18:02:51.519146 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gjs77"] Nov 28 18:02:52 crc kubenswrapper[4710]: I1128 18:02:52.147508 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:02:52 crc kubenswrapper[4710]: E1128 18:02:52.147827 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:02:53 crc kubenswrapper[4710]: I1128 18:02:53.395964 4710 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gjs77" podUID="df77e309-3971-4360-ac62-c43ff0ba9888" containerName="registry-server" containerID="cri-o://044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5" gracePeriod=2 Nov 28 18:02:53 crc kubenswrapper[4710]: I1128 18:02:53.971300 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.102100 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-utilities\") pod \"df77e309-3971-4360-ac62-c43ff0ba9888\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.102199 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-catalog-content\") pod \"df77e309-3971-4360-ac62-c43ff0ba9888\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.102267 4710 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lznsw\" (UniqueName: \"kubernetes.io/projected/df77e309-3971-4360-ac62-c43ff0ba9888-kube-api-access-lznsw\") pod \"df77e309-3971-4360-ac62-c43ff0ba9888\" (UID: \"df77e309-3971-4360-ac62-c43ff0ba9888\") " Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.104657 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-utilities" (OuterVolumeSpecName: "utilities") pod "df77e309-3971-4360-ac62-c43ff0ba9888" (UID: "df77e309-3971-4360-ac62-c43ff0ba9888"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.110952 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df77e309-3971-4360-ac62-c43ff0ba9888-kube-api-access-lznsw" (OuterVolumeSpecName: "kube-api-access-lznsw") pod "df77e309-3971-4360-ac62-c43ff0ba9888" (UID: "df77e309-3971-4360-ac62-c43ff0ba9888"). InnerVolumeSpecName "kube-api-access-lznsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.150564 4710 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "df77e309-3971-4360-ac62-c43ff0ba9888" (UID: "df77e309-3971-4360-ac62-c43ff0ba9888"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.204589 4710 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-utilities\") on node \"crc\" DevicePath \"\"" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.204633 4710 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/df77e309-3971-4360-ac62-c43ff0ba9888-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.204650 4710 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lznsw\" (UniqueName: \"kubernetes.io/projected/df77e309-3971-4360-ac62-c43ff0ba9888-kube-api-access-lznsw\") on node \"crc\" DevicePath \"\"" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.410000 4710 generic.go:334] "Generic (PLEG): container finished" podID="df77e309-3971-4360-ac62-c43ff0ba9888" containerID="044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5" exitCode=0 Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.410098 4710 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gjs77" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.410075 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjs77" event={"ID":"df77e309-3971-4360-ac62-c43ff0ba9888","Type":"ContainerDied","Data":"044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5"} Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.410221 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gjs77" event={"ID":"df77e309-3971-4360-ac62-c43ff0ba9888","Type":"ContainerDied","Data":"c6302f58812a4b6185bf5eefcc3d9aa01530d145604c2400db0783a82ffa158d"} Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.410260 4710 scope.go:117] "RemoveContainer" containerID="044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.450784 4710 scope.go:117] "RemoveContainer" containerID="ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.461708 4710 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gjs77"] Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.472261 4710 scope.go:117] "RemoveContainer" containerID="4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.476423 4710 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gjs77"] Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.544943 4710 scope.go:117] "RemoveContainer" containerID="044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5" Nov 28 18:02:54 crc kubenswrapper[4710]: E1128 18:02:54.545415 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5\": container with ID starting with 044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5 not found: ID does not exist" containerID="044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.545586 
4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5"} err="failed to get container status \"044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5\": rpc error: code = NotFound desc = could not find container \"044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5\": container with ID starting with 044077a4b7bd0c48934344e3e90858155a9b9801dcead6eb446dbb45fe67b7a5 not found: ID does not exist" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.545703 4710 scope.go:117] "RemoveContainer" containerID="ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8" Nov 28 18:02:54 crc kubenswrapper[4710]: E1128 18:02:54.546140 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8\": container with ID starting with ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8 not found: ID does not exist" containerID="ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.546187 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8"} err="failed to get container status \"ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8\": rpc error: code = NotFound desc = could not find container \"ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8\": container with ID starting with ef7f3516c2ac916d102ea0fd10d3387bbe6b81975291f1683d2093022e99e3c8 not found: ID does not exist" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.546223 4710 scope.go:117] "RemoveContainer" containerID="4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3" Nov 28 18:02:54 crc kubenswrapper[4710]: E1128 18:02:54.546558 4710 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3\": container with ID starting with 4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3 not found: ID does not exist" containerID="4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3" Nov 28 18:02:54 crc kubenswrapper[4710]: I1128 18:02:54.546603 4710 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3"} err="failed to get container status \"4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3\": rpc error: code = NotFound desc = could not find container \"4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3\": container with ID starting with 4923e9c106b01d865ec10f7f3f698788e2cf46d078b1110f826a0f702c9669d3 not found: ID does not exist" Nov 28 18:02:55 crc kubenswrapper[4710]: I1128 18:02:55.165613 4710 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df77e309-3971-4360-ac62-c43ff0ba9888" path="/var/lib/kubelet/pods/df77e309-3971-4360-ac62-c43ff0ba9888/volumes" Nov 28 18:03:07 crc kubenswrapper[4710]: I1128 18:03:07.143861 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:03:07 crc kubenswrapper[4710]: E1128 18:03:07.145003 4710 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:03:21 crc kubenswrapper[4710]: I1128 18:03:21.160450 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:03:21 crc kubenswrapper[4710]: E1128 18:03:21.161442 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:03:36 crc kubenswrapper[4710]: I1128 18:03:36.142028 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:03:36 crc kubenswrapper[4710]: E1128 18:03:36.143049 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:03:50 crc kubenswrapper[4710]: I1128 18:03:50.141514 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:03:50 crc kubenswrapper[4710]: E1128 18:03:50.142992 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:04:03 crc kubenswrapper[4710]: I1128 18:04:03.141867 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:04:03 crc kubenswrapper[4710]: E1128 18:04:03.142939 4710 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-9mscc_openshift-machine-config-operator(4ca87069-1d78-4e20-ba15-f37acec7135b)\"" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" podUID="4ca87069-1d78-4e20-ba15-f37acec7135b" Nov 28 18:04:17 crc kubenswrapper[4710]: I1128 18:04:17.142126 4710 scope.go:117] "RemoveContainer" containerID="637810e5753f3a77149682075154343c8b959ab2f810a349cf6345f1784788db" Nov 28 18:04:17 crc kubenswrapper[4710]: I1128 18:04:17.454483 4710 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-9mscc" 
event={"ID":"4ca87069-1d78-4e20-ba15-f37acec7135b","Type":"ContainerStarted","Data":"2b03bcedc4a3a6a1a4e6410aa93cb9b8650917fcbc9a911ef4ee1d983898a3e8"}